00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2413
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3678
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.156 The recommended git tool is: git
00:00:00.156 using credential 00000000-0000-0000-0000-000000000002
00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.199 Fetching changes from the remote Git repository
00:00:00.201 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.238 Using shallow fetch with depth 1
00:00:00.238 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.238 > git --version # timeout=10
00:00:00.270 > git --version # 'git version 2.39.2'
00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.295 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.295 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.600 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.611 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.625 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.625 > git config core.sparsecheckout # timeout=10
00:00:06.636 > git read-tree -mu HEAD # timeout=10
00:00:06.657 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.680 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.680 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.767 [Pipeline] Start of Pipeline
00:00:06.780 [Pipeline] library
00:00:06.782 Loading library shm_lib@master
00:00:06.782 Library shm_lib@master is cached. Copying from home.
00:00:06.795 [Pipeline] node
00:00:06.808 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.810 [Pipeline] {
00:00:06.819 [Pipeline] catchError
00:00:06.820 [Pipeline] {
00:00:06.833 [Pipeline] wrap
00:00:06.842 [Pipeline] {
00:00:06.852 [Pipeline] stage
00:00:06.854 [Pipeline] { (Prologue)
00:00:06.873 [Pipeline] echo
00:00:06.875 Node: VM-host-SM38
00:00:06.882 [Pipeline] cleanWs
00:00:06.893 [WS-CLEANUP] Deleting project workspace...
00:00:06.893 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.899 [WS-CLEANUP] done
00:00:07.100 [Pipeline] setCustomBuildProperty
00:00:07.192 [Pipeline] httpRequest
00:00:08.784 [Pipeline] echo
00:00:08.785 Sorcerer 10.211.164.101 is alive
00:00:08.792 [Pipeline] retry
00:00:08.793 [Pipeline] {
00:00:08.802 [Pipeline] httpRequest
00:00:08.807 HttpMethod: GET
00:00:08.808 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.808 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.826 Response Code: HTTP/1.1 200 OK
00:00:08.827 Success: Status code 200 is in the accepted range: 200,404
00:00:08.827 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:27.891 [Pipeline] }
00:00:27.909 [Pipeline] // retry
00:00:27.917 [Pipeline] sh
00:00:28.204 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.223 [Pipeline] httpRequest
00:00:28.632 [Pipeline] echo
00:00:28.634 Sorcerer 10.211.164.101 is alive
00:00:28.644 [Pipeline] retry
00:00:28.646 [Pipeline] {
00:00:28.658 [Pipeline] httpRequest
00:00:28.663 HttpMethod: GET
00:00:28.664 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:28.665 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:28.673 Response Code: HTTP/1.1 200 OK
00:00:28.673 Success: Status code 200 is in the accepted range: 200,404
00:00:28.674 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:56.005 [Pipeline] }
00:01:56.019 [Pipeline] // retry
00:01:56.027 [Pipeline] sh
00:01:56.310 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:59.626 [Pipeline] sh
00:01:59.908 + git -C spdk log --oneline -n5
00:01:59.908 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:59.908 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:59.908 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:59.909 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:59.909 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:59.926 [Pipeline] withCredentials
00:01:59.937 > git --version # timeout=10
00:01:59.947 > git --version # 'git version 2.39.2'
00:01:59.965 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:59.967 [Pipeline] {
00:01:59.975 [Pipeline] retry
00:01:59.977 [Pipeline] {
00:01:59.991 [Pipeline] sh
00:02:00.274 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:02:00.287 [Pipeline] }
00:02:00.333 [Pipeline] // retry
00:02:00.338 [Pipeline] }
00:02:00.358 [Pipeline] // withCredentials
00:02:00.369 [Pipeline] httpRequest
00:02:01.137 [Pipeline] echo
00:02:01.138 Sorcerer 10.211.164.101 is alive
00:02:01.149 [Pipeline] retry
00:02:01.152 [Pipeline] {
00:02:01.167 [Pipeline] httpRequest
00:02:01.172 HttpMethod: GET
00:02:01.172 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:01.173 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:01.186 Response Code: HTTP/1.1 200 OK
00:02:01.186 Success: Status code 200 is in the accepted range: 200,404
00:02:01.187 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:06.554 [Pipeline] }
00:02:06.571 [Pipeline] // retry
00:02:06.578 [Pipeline] sh
00:02:06.861 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:08.788 [Pipeline] sh
00:02:09.071 + git -C dpdk log --oneline -n5
00:02:09.071 caf0f5d395 version: 22.11.4
00:02:09.071 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:09.071 dc9c799c7d vhost: fix missing spinlock unlock
00:02:09.071 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:09.071 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:09.092 [Pipeline] writeFile
00:02:09.108 [Pipeline] sh
00:02:09.391 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:09.405 [Pipeline] sh
00:02:09.689 + cat autorun-spdk.conf
00:02:09.689 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.689 SPDK_TEST_NVME=1
00:02:09.689 SPDK_TEST_FTL=1
00:02:09.689 SPDK_TEST_ISAL=1
00:02:09.689 SPDK_RUN_ASAN=1
00:02:09.689 SPDK_RUN_UBSAN=1
00:02:09.689 SPDK_TEST_XNVME=1
00:02:09.689 SPDK_TEST_NVME_FDP=1
00:02:09.689 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:09.689 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:09.689 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.697 RUN_NIGHTLY=1
00:02:09.699 [Pipeline] }
00:02:09.715 [Pipeline] // stage
00:02:09.731 [Pipeline] stage
00:02:09.733 [Pipeline] { (Run VM)
00:02:09.749 [Pipeline] sh
00:02:10.039 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:10.039 + echo 'Start stage prepare_nvme.sh'
00:02:10.039 Start stage prepare_nvme.sh
00:02:10.039 + [[ -n 10 ]]
00:02:10.039 + disk_prefix=ex10
00:02:10.039 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:10.039 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:10.039 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:10.039 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.039 ++ SPDK_TEST_NVME=1
00:02:10.039 ++ SPDK_TEST_FTL=1
00:02:10.039 ++ SPDK_TEST_ISAL=1
00:02:10.039 ++ SPDK_RUN_ASAN=1
00:02:10.039 ++ SPDK_RUN_UBSAN=1
00:02:10.039 ++ SPDK_TEST_XNVME=1
00:02:10.039 ++ SPDK_TEST_NVME_FDP=1
00:02:10.039 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:10.039 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:10.039 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.039 ++ RUN_NIGHTLY=1
00:02:10.039 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:10.039 + nvme_files=()
00:02:10.039 + declare -A nvme_files
00:02:10.039 + backend_dir=/var/lib/libvirt/images/backends
00:02:10.039 + nvme_files['nvme.img']=5G
00:02:10.039 + nvme_files['nvme-cmb.img']=5G
00:02:10.039 + nvme_files['nvme-multi0.img']=4G
00:02:10.039 + nvme_files['nvme-multi1.img']=4G
00:02:10.039 + nvme_files['nvme-multi2.img']=4G
00:02:10.039 + nvme_files['nvme-openstack.img']=8G
00:02:10.039 + nvme_files['nvme-zns.img']=5G
00:02:10.040 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:10.040 + (( SPDK_TEST_FTL == 1 ))
00:02:10.040 + nvme_files["nvme-ftl.img"]=6G
00:02:10.040 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:10.040 + nvme_files["nvme-fdp.img"]=1G
00:02:10.040 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:10.040 + for nvme in "${!nvme_files[@]}"
00:02:10.040 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:02:10.040 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:10.040 + for nvme in "${!nvme_files[@]}"
00:02:10.040 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:02:10.040 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:10.040 + for nvme in "${!nvme_files[@]}"
00:02:10.040 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:02:10.300 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:10.300 + for nvme in "${!nvme_files[@]}"
00:02:10.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:02:10.300 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:10.300 + for nvme in "${!nvme_files[@]}"
00:02:10.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:02:10.561 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:10.561 + for nvme in "${!nvme_files[@]}"
00:02:10.561 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:02:10.561 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:10.561 + for nvme in "${!nvme_files[@]}"
00:02:10.561 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:02:10.561 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:10.561 + for nvme in "${!nvme_files[@]}"
00:02:10.561 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:02:10.561 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:10.561 + for nvme in "${!nvme_files[@]}"
00:02:10.561 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:02:11.130 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:11.130 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:02:11.130 + echo 'End stage prepare_nvme.sh'
00:02:11.130 End stage prepare_nvme.sh
00:02:11.143 [Pipeline] sh
00:02:11.428 + DISTRO=fedora39
00:02:11.428 + CPUS=10
00:02:11.428 + RAM=12288
00:02:11.428 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:11.428 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:11.428 
00:02:11.428 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:11.428 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:11.428 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:11.428 HELP=0
00:02:11.428 DRY_RUN=0
00:02:11.428 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:02:11.428 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:11.428 NVME_AUTO_CREATE=0
00:02:11.428 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:02:11.428 NVME_CMB=,,,,
00:02:11.428 NVME_PMR=,,,,
00:02:11.428 NVME_ZNS=,,,,
00:02:11.428 NVME_MS=true,,,,
00:02:11.428 NVME_FDP=,,,on,
00:02:11.428 SPDK_VAGRANT_DISTRO=fedora39
00:02:11.428 SPDK_VAGRANT_VMCPU=10
00:02:11.428 SPDK_VAGRANT_VMRAM=12288
00:02:11.428 SPDK_VAGRANT_PROVIDER=libvirt
00:02:11.428 SPDK_VAGRANT_HTTP_PROXY=
00:02:11.428 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:11.428 SPDK_OPENSTACK_NETWORK=0
00:02:11.428 VAGRANT_PACKAGE_BOX=0
00:02:11.428 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:11.428 FORCE_DISTRO=true
00:02:11.428 VAGRANT_BOX_VERSION=
00:02:11.428 EXTRA_VAGRANTFILES=
00:02:11.428 NIC_MODEL=e1000
00:02:11.428 
00:02:11.428 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:11.428 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:13.972 Bringing machine 'default' up with 'libvirt' provider...
00:02:14.234 ==> default: Creating image (snapshot of base box volume).
00:02:14.234 ==> default: Creating domain with the following settings...
00:02:14.234 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732874873_50f2ce259b333c608c8e
00:02:14.234 ==> default: -- Domain type: kvm
00:02:14.234 ==> default: -- Cpus: 10
00:02:14.234 ==> default: -- Feature: acpi
00:02:14.234 ==> default: -- Feature: apic
00:02:14.234 ==> default: -- Feature: pae
00:02:14.234 ==> default: -- Memory: 12288M
00:02:14.234 ==> default: -- Memory Backing: hugepages:
00:02:14.234 ==> default: -- Management MAC:
00:02:14.234 ==> default: -- Loader:
00:02:14.234 ==> default: -- Nvram:
00:02:14.234 ==> default: -- Base box: spdk/fedora39
00:02:14.234 ==> default: -- Storage pool: default
00:02:14.234 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732874873_50f2ce259b333c608c8e.img (20G)
00:02:14.234 ==> default: -- Volume Cache: default
00:02:14.234 ==> default: -- Kernel:
00:02:14.234 ==> default: -- Initrd:
00:02:14.234 ==> default: -- Graphics Type: vnc
00:02:14.234 ==> default: -- Graphics Port: -1
00:02:14.234 ==> default: -- Graphics IP: 127.0.0.1
00:02:14.234 ==> default: -- Graphics Password: Not defined
00:02:14.234 ==> default: -- Video Type: cirrus
00:02:14.234 ==> default: -- Video VRAM: 9216
00:02:14.234 ==> default: -- Sound Type:
00:02:14.234 ==> default: -- Keymap: en-us
00:02:14.234 ==> default: -- TPM Path:
00:02:14.234 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:14.234 ==> default: -- Command line args:
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:14.234 ==> default: -> value=-drive,
00:02:14.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:14.234 ==> default: -> value=-drive,
00:02:14.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:14.234 ==> default: -> value=-drive,
00:02:14.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.234 ==> default: -> value=-drive,
00:02:14.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.234 ==> default: -> value=-drive,
00:02:14.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:14.234 ==> default: -> value=-drive,
00:02:14.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:14.234 ==> default: -> value=-device,
00:02:14.234 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.234 ==> default: Creating shared folders metadata...
00:02:14.234 ==> default: Starting domain.
00:02:15.620 ==> default: Waiting for domain to get an IP address...
00:02:33.738 ==> default: Waiting for SSH to become available...
00:02:33.738 ==> default: Configuring and enabling network interfaces...
00:02:37.059 default: SSH address: 192.168.121.129:22
00:02:37.059 default: SSH username: vagrant
00:02:37.059 default: SSH auth method: private key
00:02:39.085 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:47.232 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:53.827 ==> default: Mounting SSHFS shared folder...
00:02:55.234 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:55.234 ==> default: Checking Mount..
00:02:56.654 ==> default: Folder Successfully Mounted!
00:02:56.654 
00:02:56.654 SUCCESS!
00:02:56.654 
00:02:56.654 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:56.654 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:56.654 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:56.654 
00:02:56.664 [Pipeline] }
00:02:56.680 [Pipeline] // stage
00:02:56.689 [Pipeline] dir
00:02:56.689 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:56.691 [Pipeline] {
00:02:56.704 [Pipeline] catchError
00:02:56.705 [Pipeline] {
00:02:56.717 [Pipeline] sh
00:02:57.001 + vagrant ssh-config --host vagrant
00:02:57.001 + sed -ne '/^Host/,$p'
00:02:57.001 + tee ssh_conf
00:03:00.294 Host vagrant
00:03:00.294 HostName 192.168.121.129
00:03:00.294 User vagrant
00:03:00.294 Port 22
00:03:00.294 UserKnownHostsFile /dev/null
00:03:00.294 StrictHostKeyChecking no
00:03:00.294 PasswordAuthentication no
00:03:00.294 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:00.294 IdentitiesOnly yes
00:03:00.294 LogLevel FATAL
00:03:00.294 ForwardAgent yes
00:03:00.294 ForwardX11 yes
00:03:00.294 
00:03:00.308 [Pipeline] withEnv
00:03:00.311 [Pipeline] {
00:03:00.325 [Pipeline] sh
00:03:00.603 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:03:00.603 source /etc/os-release
00:03:00.603 [[ -e /image.version ]] && img=$(< /image.version)
00:03:00.603 # Minimal, systemd-like check.
00:03:00.603 if [[ -e /.dockerenv ]]; then
00:03:00.603 # Clear garbage from the node'\''s name:
00:03:00.603 # agt-er_autotest_547-896 -> autotest_547-896
00:03:00.603 # $HOSTNAME is the actual container id
00:03:00.603 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:00.603 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:00.603 # We can assume this is a mount from a host where container is running,
00:03:00.603 # so fetch its hostname to easily identify the target swarm worker.
00:03:00.603 container="$(< /etc/hostname) ($agent)"
00:03:00.603 else
00:03:00.603 # Fallback
00:03:00.603 container=$agent
00:03:00.603 fi
00:03:00.603 fi
00:03:00.603 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:00.603 '
00:03:00.613 [Pipeline] }
00:03:00.630 [Pipeline] // withEnv
00:03:00.639 [Pipeline] setCustomBuildProperty
00:03:00.654 [Pipeline] stage
00:03:00.657 [Pipeline] { (Tests)
00:03:00.674 [Pipeline] sh
00:03:00.951 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:00.964 [Pipeline] sh
00:03:01.241 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:01.256 [Pipeline] timeout
00:03:01.256 Timeout set to expire in 50 min
00:03:01.258 [Pipeline] {
00:03:01.273 [Pipeline] sh
00:03:01.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:03:01.824 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:03:01.835 [Pipeline] sh
00:03:02.111 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:03:02.383 [Pipeline] sh
00:03:02.660 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:02.673 [Pipeline] sh
00:03:02.949 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:03:02.949 ++ readlink -f spdk_repo
00:03:02.949 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:02.949 + [[ -n /home/vagrant/spdk_repo ]]
00:03:02.949 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:02.949 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:02.949 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:02.949 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:02.949 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:02.949 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:02.949 + cd /home/vagrant/spdk_repo
00:03:02.949 + source /etc/os-release
00:03:02.949 ++ NAME='Fedora Linux'
00:03:02.949 ++ VERSION='39 (Cloud Edition)'
00:03:02.949 ++ ID=fedora
00:03:02.949 ++ VERSION_ID=39
00:03:02.949 ++ VERSION_CODENAME=
00:03:02.949 ++ PLATFORM_ID=platform:f39
00:03:02.949 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:02.949 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:02.949 ++ LOGO=fedora-logo-icon
00:03:02.949 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:02.949 ++ HOME_URL=https://fedoraproject.org/
00:03:02.949 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:02.949 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:02.949 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:02.949 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:02.949 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:02.949 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:02.949 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:02.949 ++ SUPPORT_END=2024-11-12
00:03:02.949 ++ VARIANT='Cloud Edition'
00:03:02.949 ++ VARIANT_ID=cloud
00:03:02.949 + uname -a
00:03:02.949 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:02.949 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:03.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:03.517 Hugepages
00:03:03.517 node hugesize free / total
00:03:03.517 node0 1048576kB 0 / 0
00:03:03.517 node0 2048kB 0 / 0
00:03:03.517 
00:03:03.517 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:03.517 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:03.517 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:03.775 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:03.775 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:03.775 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:03.775 + rm -f /tmp/spdk-ld-path
00:03:03.775 + source autorun-spdk.conf
00:03:03.775 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:03.775 ++ SPDK_TEST_NVME=1
00:03:03.775 ++ SPDK_TEST_FTL=1
00:03:03.775 ++ SPDK_TEST_ISAL=1
00:03:03.775 ++ SPDK_RUN_ASAN=1
00:03:03.775 ++ SPDK_RUN_UBSAN=1
00:03:03.775 ++ SPDK_TEST_XNVME=1
00:03:03.775 ++ SPDK_TEST_NVME_FDP=1
00:03:03.775 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:03.775 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:03.775 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:03.775 ++ RUN_NIGHTLY=1
00:03:03.775 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:03.775 + [[ -n '' ]]
00:03:03.775 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:03.775 + for M in /var/spdk/build-*-manifest.txt
00:03:03.775 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:03.775 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.775 + for M in /var/spdk/build-*-manifest.txt
00:03:03.775 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:03.775 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.775 + for M in /var/spdk/build-*-manifest.txt
00:03:03.775 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:03.775 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.775 ++ uname
00:03:03.775 + [[ Linux == \L\i\n\u\x ]]
00:03:03.775 + sudo dmesg -T
00:03:03.775 + sudo dmesg --clear
00:03:03.775 + dmesg_pid=5772
00:03:03.775 + [[ Fedora Linux == FreeBSD ]]
00:03:03.775 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:03.775 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:03.775 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:03.775 + [[ -x /usr/src/fio-static/fio ]]
00:03:03.775 + sudo dmesg -Tw
00:03:03.775 + export FIO_BIN=/usr/src/fio-static/fio
00:03:03.775 + FIO_BIN=/usr/src/fio-static/fio
00:03:03.775 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:03.775 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:03.775 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:03.775 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:03.775 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:03.775 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:03.775 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:03.775 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:03.775 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.775 10:08:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:03.775 10:08:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:03.775 10:08:43 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:03:03.775 10:08:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:03.775 10:08:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.775 10:08:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:03.775 10:08:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:03.775 10:08:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:03.775 10:08:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:03.775 10:08:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:03.775 10:08:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:03.775 10:08:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.775 10:08:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.775 10:08:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.775 10:08:43 -- paths/export.sh@5 -- $ export PATH
00:03:03.775 10:08:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.775 10:08:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:03.775 10:08:43 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:04.034 10:08:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732874923.XXXXXX
00:03:04.034 10:08:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732874923.5O8H29
00:03:04.034 10:08:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:04.034 10:08:43 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:03:04.034 10:08:43 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:04.034 10:08:43 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:03:04.034 10:08:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:04.034 10:08:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:04.034 10:08:43 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:04.034 10:08:43 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:04.034 10:08:43 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.035 10:08:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:03:04.035 10:08:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:04.035 10:08:43 -- pm/common@17 -- $ local monitor
00:03:04.035 10:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.035 10:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.035 10:08:43 -- pm/common@25 -- $ sleep 1
00:03:04.035 10:08:43 -- pm/common@21 -- $ date +%s
00:03:04.035 10:08:43 -- pm/common@21 -- $ date +%s
00:03:04.035 10:08:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732874923
00:03:04.035 10:08:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732874923
00:03:04.035 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732874923_collect-vmstat.pm.log
00:03:04.035 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732874923_collect-cpu-load.pm.log
00:03:04.972 10:08:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:04.972 10:08:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:04.972 10:08:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:04.972 10:08:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:04.972 10:08:44 -- spdk/autobuild.sh@16 -- $ date -u
00:03:04.972 Fri Nov 29 10:08:44 AM UTC 2024
00:03:04.972 10:08:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:04.972 v25.01-pre-276-g35cd3e84d
00:03:04.972 10:08:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:04.972 10:08:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:04.972 10:08:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:04.972 10:08:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:04.972 10:08:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.972 ************************************
00:03:04.972 START TEST asan
00:03:04.972 ************************************
00:03:04.972 using asan
00:03:04.972 10:08:44 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:04.972 
00:03:04.972 real 0m0.000s
00:03:04.972 user 0m0.000s
00:03:04.972 sys 0m0.000s
00:03:04.972 10:08:44 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:04.972 ************************************
00:03:04.972 END TEST asan
00:03:04.972 10:08:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.972 ************************************
00:03:04.972 10:08:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:04.972 10:08:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:04.972 10:08:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:04.972 10:08:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:04.972 10:08:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.972 ************************************
00:03:04.972 START TEST ubsan
00:03:04.972 ************************************
00:03:04.972 using ubsan
00:03:04.972 10:08:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:04.972 
00:03:04.972 real 0m0.000s
00:03:04.972 user 0m0.000s
00:03:04.972 sys 0m0.000s
00:03:04.972 10:08:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:04.972 10:08:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.972 ************************************
00:03:04.972 END TEST ubsan
00:03:04.972 ************************************
00:03:04.972 10:08:44 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:03:04.972 10:08:44 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:04.972 10:08:44 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:04.972 10:08:44 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:04.972 10:08:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:04.972 10:08:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.972 ************************************
00:03:04.972 START TEST build_native_dpdk
00:03:04.972 ************************************
00:03:04.972 10:08:44 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:04.972 caf0f5d395 version: 22.11.4
00:03:04.972 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:03:04.972 dc9c799c7d vhost: fix missing spinlock unlock
00:03:04.972 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:03:04.972 6ef77f2a5e net/gve: fix RX buffer size alignment
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:03:04.972 patching file config/rte_config.h
00:03:04.972 Hunk #1 succeeded at 60 (offset 1 line).
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:03:04.972 patching file lib/pcapng/rte_pcapng.c
00:03:04.972 Hunk #1 succeeded at 110 (offset -18 lines).
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:04.972 10:08:44 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:03:04.972 10:08:44 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:09.156 The Meson build system
00:03:09.156 Version: 1.5.0
00:03:09.156 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:09.156 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:09.156 Build type: native build
00:03:09.156 Program cat found: YES (/usr/bin/cat)
00:03:09.156 Project name: DPDK
00:03:09.156 Project version: 22.11.4
00:03:09.156 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:09.156 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:09.156 Host machine cpu family: x86_64
00:03:09.156 Host machine cpu: x86_64
00:03:09.156 Message: ## Building in Developer Mode ##
00:03:09.156 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:09.156 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:09.156 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:09.156 Program objdump found: YES (/usr/bin/objdump)
00:03:09.156 Program python3 found: YES (/usr/bin/python3)
00:03:09.156 Program cat found: YES (/usr/bin/cat)
00:03:09.157 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:09.157 Checking for size of "void *" : 8 00:03:09.157 Checking for size of "void *" : 8 (cached) 00:03:09.157 Library m found: YES 00:03:09.157 Library numa found: YES 00:03:09.157 Has header "numaif.h" : YES 00:03:09.157 Library fdt found: NO 00:03:09.157 Library execinfo found: NO 00:03:09.157 Has header "execinfo.h" : YES 00:03:09.157 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:09.157 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:09.157 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:09.157 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:09.157 Run-time dependency openssl found: YES 3.1.1 00:03:09.157 Run-time dependency libpcap found: YES 1.10.4 00:03:09.157 Has header "pcap.h" with dependency libpcap: YES 00:03:09.157 Compiler for C supports arguments -Wcast-qual: YES 00:03:09.157 Compiler for C supports arguments -Wdeprecated: YES 00:03:09.157 Compiler for C supports arguments -Wformat: YES 00:03:09.157 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:09.157 Compiler for C supports arguments -Wformat-security: NO 00:03:09.157 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:09.157 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:09.157 Compiler for C supports arguments -Wnested-externs: YES 00:03:09.157 Compiler for C supports arguments -Wold-style-definition: YES 00:03:09.157 Compiler for C supports arguments -Wpointer-arith: YES 00:03:09.157 Compiler for C supports arguments -Wsign-compare: YES 00:03:09.157 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:09.157 Compiler for C supports arguments -Wundef: YES 00:03:09.157 Compiler for C supports arguments -Wwrite-strings: YES 00:03:09.157 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:09.157 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:09.157 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:09.157 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:09.157 Compiler for C supports arguments -mavx512f: YES 00:03:09.157 Checking if "AVX512 checking" compiles: YES 00:03:09.157 Fetching value of define "__SSE4_2__" : 1 00:03:09.157 Fetching value of define "__AES__" : 1 00:03:09.157 Fetching value of define "__AVX__" : 1 00:03:09.157 Fetching value of define "__AVX2__" : 1 00:03:09.157 Fetching value of define "__AVX512BW__" : 1 00:03:09.157 Fetching value of define "__AVX512CD__" : 1 00:03:09.157 Fetching value of define "__AVX512DQ__" : 1 00:03:09.157 Fetching value of define "__AVX512F__" : 1 00:03:09.157 Fetching value of define "__AVX512VL__" : 1 00:03:09.157 Fetching value of define "__PCLMUL__" : 1 00:03:09.157 Fetching value of define "__RDRND__" : 1 00:03:09.157 Fetching value of define "__RDSEED__" : 1 00:03:09.157 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:09.157 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:09.157 Message: lib/kvargs: Defining dependency "kvargs" 00:03:09.157 Message: lib/telemetry: Defining dependency "telemetry" 00:03:09.157 Checking for function "getentropy" : YES 00:03:09.157 Message: lib/eal: Defining dependency "eal" 00:03:09.157 Message: lib/ring: Defining dependency "ring" 00:03:09.157 Message: lib/rcu: Defining dependency "rcu" 00:03:09.157 Message: lib/mempool: Defining dependency "mempool" 00:03:09.157 Message: lib/mbuf: Defining dependency "mbuf" 00:03:09.157 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:09.157 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:09.157 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:09.157 Compiler for C supports arguments -mpclmul: YES 00:03:09.157 Compiler for C supports arguments -maes: YES 00:03:09.157 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:09.157 Compiler for C supports arguments -mavx512bw: YES 00:03:09.157 Compiler for C supports arguments -mavx512dq: YES 00:03:09.157 Compiler for C supports arguments -mavx512vl: YES 00:03:09.157 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:09.157 Compiler for C supports arguments -mavx2: YES 00:03:09.157 Compiler for C supports arguments -mavx: YES 00:03:09.157 Message: lib/net: Defining dependency "net" 00:03:09.157 Message: lib/meter: Defining dependency "meter" 00:03:09.157 Message: lib/ethdev: Defining dependency "ethdev" 00:03:09.157 Message: lib/pci: Defining dependency "pci" 00:03:09.157 Message: lib/cmdline: Defining dependency "cmdline" 00:03:09.157 Message: lib/metrics: Defining dependency "metrics" 00:03:09.157 Message: lib/hash: Defining dependency "hash" 00:03:09.157 Message: lib/timer: Defining dependency "timer" 00:03:09.157 Fetching value of define "__AVX2__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:09.157 Message: lib/acl: Defining dependency "acl" 00:03:09.157 Message: lib/bbdev: Defining dependency "bbdev" 00:03:09.157 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:09.157 Run-time dependency libelf found: YES 0.191 00:03:09.157 Message: lib/bpf: Defining dependency "bpf" 00:03:09.157 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:09.157 Message: lib/compressdev: Defining dependency "compressdev" 00:03:09.157 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:09.157 Message: lib/distributor: Defining dependency "distributor" 00:03:09.157 Message: lib/efd: Defining dependency "efd" 00:03:09.157 Message: lib/eventdev: Defining dependency "eventdev" 00:03:09.157 Message: lib/gpudev: Defining dependency "gpudev" 00:03:09.157 Message: lib/gro: Defining dependency "gro" 00:03:09.157 Message: lib/gso: Defining dependency "gso" 00:03:09.157 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:09.157 Message: lib/jobstats: Defining dependency "jobstats" 00:03:09.157 Message: lib/latencystats: Defining dependency "latencystats" 00:03:09.157 Message: lib/lpm: Defining dependency "lpm" 00:03:09.157 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512IFMA__" : 1 00:03:09.157 Message: lib/member: Defining dependency "member" 00:03:09.157 Message: lib/pcapng: Defining dependency "pcapng" 00:03:09.157 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:09.157 Message: lib/power: Defining dependency "power" 00:03:09.157 Message: lib/rawdev: Defining dependency "rawdev" 00:03:09.157 Message: lib/regexdev: Defining dependency "regexdev" 00:03:09.157 Message: lib/dmadev: Defining dependency "dmadev" 00:03:09.157 Message: lib/rib: Defining dependency "rib" 00:03:09.157 Message: lib/reorder: 
Defining dependency "reorder" 00:03:09.157 Message: lib/sched: Defining dependency "sched" 00:03:09.157 Message: lib/security: Defining dependency "security" 00:03:09.157 Message: lib/stack: Defining dependency "stack" 00:03:09.157 Has header "linux/userfaultfd.h" : YES 00:03:09.157 Message: lib/vhost: Defining dependency "vhost" 00:03:09.157 Message: lib/ipsec: Defining dependency "ipsec" 00:03:09.157 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:09.157 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:09.157 Message: lib/fib: Defining dependency "fib" 00:03:09.157 Message: lib/port: Defining dependency "port" 00:03:09.157 Message: lib/pdump: Defining dependency "pdump" 00:03:09.157 Message: lib/table: Defining dependency "table" 00:03:09.157 Message: lib/pipeline: Defining dependency "pipeline" 00:03:09.157 Message: lib/graph: Defining dependency "graph" 00:03:09.157 Message: lib/node: Defining dependency "node" 00:03:09.157 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:09.157 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:09.157 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:09.157 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:09.157 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:09.157 Compiler for C supports arguments -Wno-unused-value: YES 00:03:09.157 Compiler for C supports arguments -Wno-format: YES 00:03:09.157 Compiler for C supports arguments -Wno-format-security: YES 00:03:09.157 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:09.157 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:09.157 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:09.157 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:10.090 Fetching value of define "__AVX2__" : 1 (cached) 00:03:10.090 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:10.090 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:10.090 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:10.090 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:10.090 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:10.090 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:10.090 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:10.090 Configuring doxy-api.conf using configuration 00:03:10.090 Program sphinx-build found: NO 00:03:10.090 Configuring rte_build_config.h using configuration
00:03:10.090 Message:
00:03:10.090 =================
00:03:10.090 Applications Enabled
00:03:10.090 =================
00:03:10.090
00:03:10.090 apps:
00:03:10.090 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:03:10.090 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:03:10.090 test-security-perf,
00:03:10.090
00:03:10.090 Message:
00:03:10.090 =================
00:03:10.090 Libraries Enabled
00:03:10.090 =================
00:03:10.090
00:03:10.090 libs:
00:03:10.090 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:03:10.090 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:03:10.090 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:03:10.090 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:03:10.090 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:03:10.090 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:03:10.090 table, pipeline, graph, node,
00:03:10.090
00:03:10.090 Message:
00:03:10.090 ===============
00:03:10.090 Drivers Enabled
00:03:10.090 ===============
00:03:10.090
00:03:10.090 common:
00:03:10.090
00:03:10.090 bus:
00:03:10.090 pci, vdev,
00:03:10.090 mempool:
00:03:10.090 ring,
00:03:10.090 dma:
00:03:10.090
00:03:10.090 net:
00:03:10.090 i40e,
00:03:10.090 raw:
00:03:10.090
00:03:10.090 crypto:
00:03:10.090
00:03:10.090 compress:
00:03:10.090
00:03:10.090 regex:
00:03:10.090
00:03:10.090 vdpa:
00:03:10.090
00:03:10.090 event:
00:03:10.090
00:03:10.090 baseband:
00:03:10.090
00:03:10.090 gpu:
00:03:10.090
00:03:10.090
00:03:10.090 Message:
00:03:10.090 =================
00:03:10.090 Content Skipped
00:03:10.090 =================
00:03:10.090
00:03:10.090 apps:
00:03:10.090
00:03:10.090 libs:
00:03:10.090 kni: explicitly disabled via build config (deprecated lib)
00:03:10.090 flow_classify: explicitly disabled via build config (deprecated lib)
00:03:10.090
00:03:10.090 drivers:
00:03:10.090 common/cpt: not in enabled drivers build config
00:03:10.090 common/dpaax: not in enabled drivers build config
00:03:10.090 common/iavf: not in enabled drivers build config
00:03:10.090 common/idpf: not in enabled drivers build config
00:03:10.090 common/mvep: not in enabled drivers build config
00:03:10.090 common/octeontx: not in enabled drivers build config
00:03:10.090 bus/auxiliary: not in enabled drivers build config
00:03:10.090 bus/dpaa: not in enabled drivers build config
00:03:10.090 bus/fslmc: not in enabled drivers build config
00:03:10.090 bus/ifpga: not in enabled drivers build config
00:03:10.090 bus/vmbus: not in enabled drivers build config
00:03:10.090 common/cnxk: not in enabled drivers build config
00:03:10.090 common/mlx5: not in enabled drivers build config
00:03:10.090 common/qat: not in enabled drivers build config
00:03:10.090 common/sfc_efx: not in enabled drivers build config
00:03:10.090 mempool/bucket: not in enabled drivers build config
00:03:10.090 mempool/cnxk: not in enabled drivers build config
00:03:10.090 mempool/dpaa: not in enabled drivers build config
00:03:10.090 mempool/dpaa2: not in enabled drivers build config
00:03:10.090 mempool/octeontx: not in enabled drivers build config
00:03:10.090 mempool/stack: not in enabled drivers build config
00:03:10.090 dma/cnxk: not in enabled drivers build config
00:03:10.090 dma/dpaa: not in enabled drivers build config
00:03:10.090 dma/dpaa2: not in enabled drivers build config
00:03:10.090 dma/hisilicon: not in enabled drivers build config
00:03:10.090 dma/idxd: not in enabled drivers build config
00:03:10.090 dma/ioat: not in enabled drivers build config
00:03:10.090 dma/skeleton: not in enabled drivers build config
00:03:10.090 net/af_packet: not in enabled drivers build config
00:03:10.090 net/af_xdp: not in enabled drivers build config
00:03:10.090 net/ark: not in enabled drivers build config
00:03:10.090 net/atlantic: not in enabled drivers build config
00:03:10.090 net/avp: not in enabled drivers build config
00:03:10.090 net/axgbe: not in enabled drivers build config
00:03:10.090 net/bnx2x: not in enabled drivers build config
00:03:10.090 net/bnxt: not in enabled drivers build config
00:03:10.090 net/bonding: not in enabled drivers build config
00:03:10.090 net/cnxk: not in enabled drivers build config
00:03:10.090 net/cxgbe: not in enabled drivers build config
00:03:10.090 net/dpaa: not in enabled drivers build config
00:03:10.090 net/dpaa2: not in enabled drivers build config
00:03:10.090 net/e1000: not in enabled drivers build config
00:03:10.090 net/ena: not in enabled drivers build config
00:03:10.090 net/enetc: not in enabled drivers build config
00:03:10.090 net/enetfec: not in enabled drivers build config
00:03:10.090 net/enic: not in enabled drivers build config
00:03:10.090 net/failsafe: not in enabled drivers build config
00:03:10.090 net/fm10k: not in enabled drivers build config
00:03:10.090 net/gve: not in enabled drivers build config
00:03:10.090 net/hinic: not in enabled drivers build config
00:03:10.090 net/hns3: not in enabled drivers build config
00:03:10.090 net/iavf: not in enabled drivers build config
00:03:10.090 net/ice: not in enabled drivers build config
00:03:10.090 net/idpf: not in enabled drivers build config
00:03:10.090 net/igc: not in enabled drivers build config
00:03:10.090 net/ionic: not in enabled drivers build config
00:03:10.090 net/ipn3ke: not in enabled drivers build config
00:03:10.090 net/ixgbe: not in enabled drivers build config
00:03:10.090 net/kni: not in enabled drivers build config
00:03:10.090 net/liquidio: not in enabled drivers build config
00:03:10.090 net/mana: not in enabled drivers build config
00:03:10.090 net/memif: not in enabled drivers build config
00:03:10.090 net/mlx4: not in enabled drivers build config
00:03:10.090 net/mlx5: not in enabled drivers build config
00:03:10.090 net/mvneta: not in enabled drivers build config
00:03:10.090 net/mvpp2: not in enabled drivers build config
00:03:10.091 net/netvsc: not in enabled drivers build config
00:03:10.091 net/nfb: not in enabled drivers build config
00:03:10.091 net/nfp: not in enabled drivers build config
00:03:10.091 net/ngbe: not in enabled drivers build config
00:03:10.091 net/null: not in enabled drivers build config
00:03:10.091 net/octeontx: not in enabled drivers build config
00:03:10.091 net/octeon_ep: not in enabled drivers build config
00:03:10.091 net/pcap: not in enabled drivers build config
00:03:10.091 net/pfe: not in enabled drivers build config
00:03:10.091 net/qede: not in enabled drivers build config
00:03:10.091 net/ring: not in enabled drivers build config
00:03:10.091 net/sfc: not in enabled drivers build config
00:03:10.091 net/softnic: not in enabled drivers build config
00:03:10.091 net/tap: not in enabled drivers build config
00:03:10.091 net/thunderx: not in enabled drivers build config
00:03:10.091 net/txgbe: not in enabled drivers build config
00:03:10.091 net/vdev_netvsc: not in enabled drivers build config
00:03:10.091 net/vhost: not in enabled drivers build config
00:03:10.091 net/virtio: not in enabled drivers build config
00:03:10.091 net/vmxnet3: not in enabled drivers build config
00:03:10.091 raw/cnxk_bphy: not in enabled drivers build config
00:03:10.091 raw/cnxk_gpio: not in enabled drivers build config
00:03:10.091 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:10.091 raw/ifpga: not in enabled drivers build config
00:03:10.091 raw/ntb: not in enabled drivers build config
00:03:10.091 raw/skeleton: not in enabled drivers build config
00:03:10.091 crypto/armv8: not in enabled drivers build config
00:03:10.091 crypto/bcmfs: not in enabled drivers build config
00:03:10.091 crypto/caam_jr: not in enabled drivers build config
00:03:10.091 crypto/ccp: not in enabled drivers build config
00:03:10.091 crypto/cnxk: not in enabled drivers build config
00:03:10.091 crypto/dpaa_sec: not in enabled drivers build config
00:03:10.091 crypto/dpaa2_sec: not in enabled drivers build config
00:03:10.091 crypto/ipsec_mb: not in enabled drivers build config
00:03:10.091 crypto/mlx5: not in enabled drivers build config
00:03:10.091 crypto/mvsam: not in enabled drivers build config
00:03:10.091 crypto/nitrox: not in enabled drivers build config
00:03:10.091 crypto/null: not in enabled drivers build config
00:03:10.091 crypto/octeontx: not in enabled drivers build config
00:03:10.091 crypto/openssl: not in enabled drivers build config
00:03:10.091 crypto/scheduler: not in enabled drivers build config
00:03:10.091 crypto/uadk: not in enabled drivers build config
00:03:10.091 crypto/virtio: not in enabled drivers build config
00:03:10.091 compress/isal: not in enabled drivers build config
00:03:10.091 compress/mlx5: not in enabled drivers build config
00:03:10.091 compress/octeontx: not in enabled drivers build config
00:03:10.091 compress/zlib: not in enabled drivers build config
00:03:10.091 regex/mlx5: not in enabled drivers build config
00:03:10.091 regex/cn9k: not in enabled drivers build config
00:03:10.091 vdpa/ifc: not in enabled drivers build config
00:03:10.091 vdpa/mlx5: not in enabled drivers build config
00:03:10.091 vdpa/sfc: not in enabled drivers build config
00:03:10.091 event/cnxk: not in enabled drivers build config
00:03:10.091 event/dlb2: not in enabled drivers build config
00:03:10.091 event/dpaa: not in enabled drivers build config
00:03:10.091 event/dpaa2: not in enabled drivers build config
00:03:10.091 event/dsw: not in enabled drivers build config
00:03:10.091 event/opdl: not in enabled drivers build config
00:03:10.091 event/skeleton: not in enabled drivers build config
00:03:10.091 event/sw: not in enabled drivers build config
00:03:10.091 event/octeontx: not in enabled drivers build config
00:03:10.091 baseband/acc: not in enabled drivers build config
00:03:10.091 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:10.091 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:10.091 baseband/la12xx: not in enabled drivers build config
00:03:10.091 baseband/null: not in enabled drivers build config
00:03:10.091 baseband/turbo_sw: not in enabled drivers build config
00:03:10.091 gpu/cuda: not in enabled drivers build config
00:03:10.091
00:03:10.091
00:03:10.091 Build targets in project: 309
00:03:10.091
00:03:10.091 DPDK 22.11.4
00:03:10.091
00:03:10.091 User defined options
00:03:10.091 libdir : lib
00:03:10.091 prefix : /home/vagrant/spdk_repo/dpdk/build
00:03:10.091 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:10.091 c_link_args :
00:03:10.091 enable_docs : false
00:03:10.091 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:10.091 enable_kmods : false
00:03:10.091 machine : native
00:03:10.091 tests : false
00:03:10.091
00:03:10.091 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:10.091 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
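The WARNING above is Meson's deprecation notice for the legacy `meson [options]` invocation style used by the build wrapper. For reference, a minimal sketch of the equivalent explicit `meson setup` call, reconstructed from the "User defined options" summary above; the wrapper's actual command line is not shown in this excerpt, so treat this as illustrative only:

    # Option values copied verbatim from the "User defined options" block above;
    # c_link_args is empty there and is therefore omitted. build-tmp matches the
    # directory ninja is run against in the next entries.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

The entries that follow show the compile step, with ninja invoked against the same build-tmp directory.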
00:03:10.091 10:08:49 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:10.348 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.348 [1/738] Generating lib/rte_kvargs_def with a custom command 00:03:10.348 [2/738] Generating lib/rte_telemetry_def with a custom command 00:03:10.348 [3/738] Generating lib/rte_telemetry_mingw with a custom command 00:03:10.348 [4/738] Generating lib/rte_kvargs_mingw with a custom command 00:03:10.348 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:10.348 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:10.349 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:10.349 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:10.349 [9/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:10.349 [10/738] Linking static target lib/librte_kvargs.a 00:03:10.349 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:10.349 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:10.349 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:10.349 [14/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.349 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:10.349 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:10.349 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:10.606 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:10.606 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:10.606 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.606 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:10.606 [22/738] Linking target lib/librte_kvargs.so.23.0 00:03:10.606 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:10.606 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:10.606 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:10.606 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:10.606 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:10.606 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:10.606 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:10.606 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:10.606 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:10.606 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:10.864 [33/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.864 [34/738] Linking static target lib/librte_telemetry.a 00:03:10.864 [35/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:10.864 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:10.864 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:10.864 [38/738] Compiling C 
object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:10.864 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:10.864 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:10.864 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:10.864 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:10.864 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:10.864 [44/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.864 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:11.122 [46/738] Linking target lib/librte_telemetry.so.23.0 00:03:11.122 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:11.122 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:11.122 [49/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:11.122 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:11.122 [51/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:11.122 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:11.122 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:11.122 [54/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:11.122 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:11.122 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:11.122 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:11.122 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:11.122 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:11.122 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:11.122 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:11.122 [62/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:11.122 [63/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:11.122 [64/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:11.122 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:11.122 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:11.122 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:11.380 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:11.380 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:11.380 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:11.380 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:11.380 [72/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:11.380 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:11.380 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:11.380 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:11.380 [76/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:11.380 [77/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:11.380 [78/738] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:11.380 [79/738] Generating lib/rte_eal_def with a custom command 00:03:11.380 [80/738] Generating lib/rte_eal_mingw with a custom command 00:03:11.380 [81/738] Generating lib/rte_ring_def with a custom command 00:03:11.380 [82/738] Generating lib/rte_ring_mingw with a custom command 00:03:11.380 [83/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:11.380 [84/738] Generating lib/rte_rcu_def with a custom command 00:03:11.380 [85/738] Generating lib/rte_rcu_mingw with a custom command 00:03:11.380 [86/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:11.638 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:11.638 [88/738] Linking static target lib/librte_ring.a 00:03:11.638 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:11.638 [90/738] Generating lib/rte_mempool_def with a custom command 00:03:11.638 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:11.638 [92/738] Generating lib/rte_mempool_mingw with a custom command 00:03:11.638 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:11.638 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.638 [95/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:11.638 [96/738] Linking static target lib/librte_eal.a 00:03:11.897 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:11.897 [98/738] Generating lib/rte_mbuf_def with a custom command 00:03:11.897 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:11.897 [100/738] Generating lib/rte_mbuf_mingw with a custom command 00:03:11.897 [101/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:11.897 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:11.897 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:11.897 [104/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:12.156 [105/738] Linking static target lib/librte_mempool.a 00:03:12.156 [106/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:12.156 [107/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:12.156 [108/738] Linking static target lib/librte_rcu.a 00:03:12.156 [109/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:12.156 [110/738] Generating lib/rte_net_def with a custom command 00:03:12.156 [111/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:12.156 [112/738] Generating lib/rte_net_mingw with a custom command 00:03:12.156 [113/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:12.414 [114/738] Generating lib/rte_meter_def with a custom command 00:03:12.414 [115/738] Generating lib/rte_meter_mingw with a custom command 00:03:12.414 [116/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:12.414 [117/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:12.414 [118/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.414 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:12.414 [120/738] Linking static target lib/librte_meter.a 00:03:12.414 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:12.414 [122/738] Linking static target 
lib/librte_net.a 00:03:12.414 [123/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:12.414 [124/738] Linking static target lib/librte_mbuf.a 00:03:12.414 [125/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.673 [126/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.673 [127/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.673 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:12.673 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:12.673 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:12.673 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:12.673 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:12.931 [133/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.931 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:12.931 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:12.931 [136/738] Generating lib/rte_ethdev_def with a custom command 00:03:12.931 [137/738] Generating lib/rte_ethdev_mingw with a custom command 00:03:13.189 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:13.189 [139/738] Generating lib/rte_pci_def with a custom command 00:03:13.189 [140/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:13.189 [141/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:13.189 [142/738] Generating lib/rte_pci_mingw with a custom command 00:03:13.189 [143/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:13.189 [144/738] Linking static target lib/librte_pci.a 00:03:13.189 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:13.189 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:13.189 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:13.189 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.189 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.446 [150/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:13.446 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:13.446 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:13.446 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:13.446 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:13.446 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:13.446 [156/738] Generating lib/rte_cmdline_def with a custom command 00:03:13.446 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:13.446 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:03:13.446 [159/738] Generating lib/rte_metrics_def with a custom command 00:03:13.446 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:13.446 [161/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:13.446 [162/738] Generating lib/rte_metrics_mingw with a custom command 
00:03:13.446 [163/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:13.446 [164/738] Generating lib/rte_hash_def with a custom command 00:03:13.446 [165/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:13.446 [166/738] Generating lib/rte_hash_mingw with a custom command 00:03:13.446 [167/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:13.446 [168/738] Generating lib/rte_timer_def with a custom command 00:03:13.704 [169/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:13.705 [170/738] Generating lib/rte_timer_mingw with a custom command 00:03:13.705 [171/738] Linking static target lib/librte_cmdline.a 00:03:13.705 [172/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:13.705 [173/738] Linking static target lib/librte_metrics.a 00:03:13.705 [174/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:13.962 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:13.962 [176/738] Linking static target lib/librte_timer.a 00:03:13.962 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.220 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:14.220 [179/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:14.220 [180/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:14.220 [181/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.220 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.478 [183/738] Generating lib/rte_acl_def with a custom command 00:03:14.478 [184/738] Generating lib/rte_acl_mingw with a custom command 00:03:14.478 [185/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:14.478 [186/738] Linking static target lib/librte_ethdev.a 00:03:14.478 [187/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:14.478 [188/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:14.478 [189/738] Generating lib/rte_bbdev_def with a custom command 00:03:14.478 [190/738] Generating lib/rte_bbdev_mingw with a custom command 00:03:14.478 [191/738] Generating lib/rte_bitratestats_def with a custom command 00:03:14.478 [192/738] Generating lib/rte_bitratestats_mingw with a custom command 00:03:14.736 [193/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:14.736 [194/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:14.994 [195/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:14.994 [196/738] Linking static target lib/librte_bitratestats.a 00:03:14.994 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:14.994 [198/738] Linking static target lib/librte_bbdev.a 00:03:14.994 [199/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:14.994 [200/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.251 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:15.251 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:15.251 [203/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:15.251 [204/738] Linking static target lib/librte_hash.a 00:03:15.251 [205/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.508 [206/738] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:15.508 [207/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:15.771 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:15.771 [209/738] Generating lib/rte_bpf_def with a custom command 00:03:15.771 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:03:15.771 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:15.771 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:03:15.771 [213/738] Generating lib/rte_cfgfile_mingw with a custom command 00:03:15.771 [214/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.771 [215/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:16.029 [216/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:16.029 [217/738] Linking static target lib/librte_cfgfile.a 00:03:16.029 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:16.029 [219/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:16.029 [220/738] Generating lib/rte_compressdev_def with a custom command 00:03:16.029 [221/738] Generating lib/rte_compressdev_mingw with a custom command 00:03:16.029 [222/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:16.287 [223/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.287 [224/738] Generating lib/rte_cryptodev_def with a custom command 00:03:16.287 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:16.287 [226/738] Generating lib/rte_cryptodev_mingw with a custom command 00:03:16.287 [227/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:16.287 [228/738] Linking static target lib/librte_compressdev.a 00:03:16.287 [229/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:16.287 [230/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:16.287 [231/738] Linking static target lib/librte_bpf.a 00:03:16.545 [232/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:16.545 [233/738] Generating lib/rte_distributor_def with a custom command 00:03:16.545 [234/738] Generating lib/rte_distributor_mingw with a custom command 00:03:16.545 [235/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.545 [236/738] Generating lib/rte_efd_def with a custom command 00:03:16.545 [237/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:16.545 [238/738] Generating lib/rte_efd_mingw with a custom command 00:03:16.545 [239/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:16.545 [240/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:16.804 [241/738] Linking static target lib/librte_acl.a 00:03:16.804 [242/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:16.804 [243/738] Linking static target lib/librte_distributor.a 00:03:16.804 [244/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:16.804 [245/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.804 [246/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.062 [247/738] 
Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:17.062 [248/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.062 [249/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.062 [250/738] Linking target lib/librte_eal.so.23.0 00:03:17.062 [251/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:17.062 [252/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:17.320 [253/738] Linking target lib/librte_ring.so.23.0 00:03:17.320 [254/738] Linking target lib/librte_meter.so.23.0 00:03:17.320 [255/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:17.320 [256/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:17.320 [257/738] Linking target lib/librte_rcu.so.23.0 00:03:17.320 [258/738] Linking target lib/librte_mempool.so.23.0 00:03:17.320 [259/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:17.320 [260/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:17.579 [261/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:17.579 [262/738] Linking target lib/librte_mbuf.so.23.0 00:03:17.579 [263/738] Linking target lib/librte_pci.so.23.0 00:03:17.579 [264/738] Linking target lib/librte_timer.so.23.0 00:03:17.579 [265/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:17.579 [266/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:17.579 [267/738] Linking target lib/librte_acl.so.23.0 00:03:17.579 [268/738] Linking target lib/librte_net.so.23.0 00:03:17.579 [269/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:17.579 [270/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:17.579 [271/738] Linking target lib/librte_cfgfile.so.23.0 00:03:17.579 [272/738] Linking target lib/librte_bbdev.so.23.0 00:03:17.579 [273/738] Linking target lib/librte_compressdev.so.23.0 00:03:17.579 [274/738] Linking static target lib/librte_efd.a 00:03:17.579 [275/738] Linking target lib/librte_distributor.so.23.0 00:03:17.579 [276/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:17.579 [277/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:17.579 [278/738] Linking static target lib/librte_cryptodev.a 00:03:17.579 [279/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:17.579 [280/738] Generating lib/rte_eventdev_def with a custom command 00:03:17.579 [281/738] Generating lib/rte_eventdev_mingw with a custom command 00:03:17.837 [282/738] Linking target lib/librte_hash.so.23.0 00:03:17.837 [283/738] Linking target lib/librte_cmdline.so.23.0 00:03:17.837 [284/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:17.837 [285/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:17.837 [286/738] Generating lib/rte_gpudev_def with a custom command 00:03:17.837 [287/738] Generating lib/rte_gpudev_mingw with a custom command 00:03:17.837 [288/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:17.837 [289/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:17.837 [290/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.837 [291/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:17.837 [292/738] Linking target lib/librte_efd.so.23.0 00:03:17.837 [293/738] Linking target lib/librte_ethdev.so.23.0 00:03:17.837 [294/738] Generating lib/rte_gro_def with a custom command 00:03:17.837 [295/738] Generating lib/rte_gro_mingw with a custom command 00:03:18.095 [296/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:18.096 [297/738] Linking target lib/librte_metrics.so.23.0 00:03:18.096 [298/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:18.096 [299/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:18.096 [300/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:18.096 [301/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:18.096 [302/738] Linking target lib/librte_bitratestats.so.23.0 00:03:18.096 [303/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:18.096 [304/738] Linking static target lib/librte_gpudev.a 00:03:18.096 [305/738] Linking target lib/librte_bpf.so.23.0 00:03:18.354 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:18.354 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:18.354 [308/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:18.354 [309/738] Linking static target lib/librte_gro.a 00:03:18.354 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:18.354 [311/738] Generating lib/rte_gso_def with a custom command 00:03:18.354 [312/738] Generating lib/rte_gso_mingw with a custom command 00:03:18.354 [313/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:18.354 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:18.354 [315/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.612 [316/738] Linking target lib/librte_gro.so.23.0 00:03:18.612 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:18.612 [318/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:18.612 [319/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:18.612 [320/738] Linking static target lib/librte_gso.a 00:03:18.612 [321/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:18.612 [322/738] Linking static target lib/librte_eventdev.a 00:03:18.870 [323/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.870 [324/738] Linking target lib/librte_gso.so.23.0 00:03:18.870 [325/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.870 [326/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:18.870 [327/738] Generating lib/rte_ip_frag_def with a custom command 00:03:18.870 [328/738] Linking target lib/librte_gpudev.so.23.0 00:03:18.870 [329/738] Generating lib/rte_ip_frag_mingw with a custom command 00:03:18.870 [330/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:18.870 [331/738] Generating lib/rte_jobstats_def with a custom command 00:03:18.870 [332/738] Generating lib/rte_jobstats_mingw with a custom command 00:03:18.870 [333/738] Generating lib/rte_latencystats_def with a custom command 
00:03:18.870 [334/738] Generating lib/rte_latencystats_mingw with a custom command 00:03:18.870 [335/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:18.870 [336/738] Generating lib/rte_lpm_def with a custom command 00:03:18.870 [337/738] Generating lib/rte_lpm_mingw with a custom command 00:03:18.870 [338/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:18.870 [339/738] Linking static target lib/librte_jobstats.a 00:03:18.870 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:18.870 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:18.870 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:18.870 [343/738] Linking static target lib/librte_ip_frag.a 00:03:19.128 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.128 [345/738] Linking target lib/librte_jobstats.so.23.0 00:03:19.128 [346/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.128 [347/738] Linking target lib/librte_ip_frag.so.23.0 00:03:19.128 [348/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:19.128 [349/738] Linking static target lib/librte_latencystats.a 00:03:19.386 [350/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:19.386 [351/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:19.386 [352/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.386 [353/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:19.386 [354/738] Generating lib/rte_member_def with a custom command 00:03:19.386 [355/738] Generating lib/rte_member_mingw with a custom command 00:03:19.386 [356/738] Linking target lib/librte_cryptodev.so.23.0 00:03:19.386 [357/738] Generating lib/rte_pcapng_def with a custom command 00:03:19.386 [358/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.386 [359/738] Generating lib/rte_pcapng_mingw with a custom command 00:03:19.386 [360/738] Linking target lib/librte_latencystats.so.23.0 00:03:19.386 [361/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:19.386 [362/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:19.644 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:19.644 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:19.644 [365/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:19.644 [366/738] Linking static target lib/librte_lpm.a 00:03:19.644 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:19.644 [368/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:19.644 [369/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:19.644 [370/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:19.903 [371/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:19.903 [372/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:19.903 [373/738] Generating lib/rte_power_def with a custom command 00:03:19.903 [374/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:19.903 [375/738] Generating lib/rte_power_mingw with a custom command 00:03:19.903 [376/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:19.903 [377/738] Linking static target lib/librte_pcapng.a 00:03:19.903 [378/738] Generating lib/rte_rawdev_def with a custom command 00:03:19.903 [379/738] Linking target lib/librte_lpm.so.23.0 00:03:19.903 [380/738] Generating lib/rte_rawdev_mingw with a custom command 00:03:19.903 [381/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:19.903 [382/738] Generating lib/rte_regexdev_def with a custom command 00:03:19.903 [383/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:19.903 [384/738] Generating lib/rte_regexdev_mingw with a custom command 00:03:19.903 [385/738] Generating lib/rte_dmadev_def with a custom command 00:03:20.161 [386/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.161 [387/738] Generating lib/rte_dmadev_mingw with a custom command 00:03:20.161 [388/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:20.161 [389/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:20.161 [390/738] Linking target lib/librte_eventdev.so.23.0 00:03:20.161 [391/738] Generating lib/rte_rib_def with a custom command 00:03:20.161 [392/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.161 [393/738] Generating lib/rte_rib_mingw with a custom command 00:03:20.161 [394/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:20.161 [395/738] Linking static target lib/librte_rawdev.a 00:03:20.161 [396/738] Linking target lib/librte_pcapng.so.23.0 00:03:20.161 [397/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:20.161 [398/738] Linking static target lib/librte_power.a 00:03:20.161 [399/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:20.161 [400/738] Generating lib/rte_reorder_def with a custom command 00:03:20.161 [401/738] Generating lib/rte_reorder_mingw with a custom command 00:03:20.161 [402/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:20.420 [403/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:20.420 [404/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:20.420 [405/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:20.420 [406/738] Linking static target lib/librte_member.a 00:03:20.420 [407/738] Linking static target lib/librte_dmadev.a 00:03:20.420 [408/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:20.420 [409/738] Linking static target lib/librte_regexdev.a 00:03:20.420 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:20.420 [411/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.420 [412/738] Linking target lib/librte_rawdev.so.23.0 00:03:20.420 [413/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:20.420 [414/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:20.420 [415/738] Generating lib/rte_sched_def with a custom command 00:03:20.420 [416/738] Generating lib/rte_sched_mingw with a custom command 00:03:20.420 [417/738] Generating lib/rte_security_def with a custom command 00:03:20.420 [418/738] Generating 
lib/rte_security_mingw with a custom command 00:03:20.678 [419/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:20.678 [420/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.678 [421/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:20.678 [422/738] Linking target lib/librte_member.so.23.0 00:03:20.678 [423/738] Generating lib/rte_stack_def with a custom command 00:03:20.678 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:20.678 [425/738] Linking static target lib/librte_stack.a 00:03:20.678 [426/738] Generating lib/rte_stack_mingw with a custom command 00:03:20.678 [427/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.678 [428/738] Linking static target lib/librte_reorder.a 00:03:20.678 [429/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:20.678 [430/738] Linking static target lib/librte_rib.a 00:03:20.678 [431/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:20.678 [432/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.678 [433/738] Linking target lib/librte_dmadev.so.23.0 00:03:20.678 [434/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.678 [435/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.936 [436/738] Linking target lib/librte_stack.so.23.0 00:03:20.936 [437/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.936 [438/738] Linking target lib/librte_power.so.23.0 00:03:20.936 [439/738] Linking target lib/librte_reorder.so.23.0 00:03:20.936 [440/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:20.936 [441/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.936 [442/738] Linking target lib/librte_regexdev.so.23.0 00:03:20.936 [443/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.936 [444/738] Linking static target lib/librte_security.a 00:03:20.936 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.936 [446/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:20.936 [447/738] Linking target lib/librte_rib.so.23.0 00:03:20.936 [448/738] Generating lib/rte_vhost_def with a custom command 00:03:20.936 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:03:21.195 [450/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:21.195 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:21.195 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.195 [453/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:21.195 [454/738] Linking target lib/librte_security.so.23.0 00:03:21.454 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:21.454 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:21.454 [457/738] Linking static target lib/librte_sched.a 00:03:21.454 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:21.454 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:21.713 [460/738] Generating lib/rte_ipsec_def with a custom command 00:03:21.713 [461/738] 
Generating lib/rte_ipsec_mingw with a custom command 00:03:21.713 [462/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.713 [463/738] Linking target lib/librte_sched.so.23.0 00:03:21.713 [464/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:21.713 [465/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:21.972 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:21.972 [467/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.972 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:21.972 [469/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:21.972 [470/738] Generating lib/rte_fib_def with a custom command 00:03:21.972 [471/738] Generating lib/rte_fib_mingw with a custom command 00:03:21.972 [472/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:22.229 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:22.229 [474/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:22.229 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:22.229 [476/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:22.229 [477/738] Linking static target lib/librte_ipsec.a 00:03:22.522 [478/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:22.522 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:22.522 [480/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:22.522 [481/738] Linking static target lib/librte_fib.a 00:03:22.522 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:22.522 [483/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.522 [484/738] Linking target lib/librte_ipsec.so.23.0 00:03:22.780 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:22.780 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:22.780 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:22.780 [488/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.780 [489/738] Linking target lib/librte_fib.so.23.0 00:03:23.039 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:23.039 [491/738] Generating lib/rte_port_def with a custom command 00:03:23.039 [492/738] Generating lib/rte_port_mingw with a custom command 00:03:23.039 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:23.039 [494/738] Generating lib/rte_pdump_def with a custom command 00:03:23.039 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:23.039 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:03:23.352 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:23.352 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:23.352 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:23.352 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:23.352 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:23.352 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:23.610 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:03:23.610 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:23.610 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:23.610 [506/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:23.610 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:23.610 [508/738] Linking static target lib/librte_port.a 00:03:23.610 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:23.610 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:23.869 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:23.869 [512/738] Linking static target lib/librte_pdump.a 00:03:23.869 [513/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.128 [514/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.128 [515/738] Linking target lib/librte_port.so.23.0 00:03:24.128 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:24.128 [517/738] Linking target lib/librte_pdump.so.23.0 00:03:24.129 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:24.129 [519/738] Generating lib/rte_table_def with a custom command 00:03:24.129 [520/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:24.129 [521/738] Generating lib/rte_table_mingw with a custom command 00:03:24.129 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:24.129 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:24.386 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:24.386 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:24.387 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:24.387 [527/738] Generating lib/rte_pipeline_def with a custom command 00:03:24.387 [528/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:24.387 [529/738] Linking static target lib/librte_table.a 00:03:24.387 [530/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:24.387 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:24.645 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:24.645 [533/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.645 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:24.903 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:24.903 [536/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.903 [537/738] Linking target lib/librte_table.so.23.0 00:03:24.903 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:24.903 [539/738] Generating lib/rte_graph_def with a custom command 00:03:24.903 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:24.903 [541/738] Generating lib/rte_graph_mingw with a custom command 00:03:24.903 [542/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:25.161 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:25.161 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:25.161 [545/738] Linking static target 
lib/librte_graph.a 00:03:25.161 [546/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:25.161 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:25.419 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:25.419 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:25.419 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:25.419 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:25.419 [552/738] Generating lib/rte_node_def with a custom command 00:03:25.419 [553/738] Generating lib/rte_node_mingw with a custom command 00:03:25.678 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:25.678 [555/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:25.678 [556/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:25.678 [557/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:25.678 [558/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.678 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:25.678 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:25.678 [561/738] Linking target lib/librte_graph.so.23.0 00:03:25.678 [562/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:25.936 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:25.936 [564/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:25.936 [565/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:25.936 [566/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:25.936 [567/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:25.936 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:25.936 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:25.936 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:25.936 [571/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:25.936 [572/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:25.936 [573/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:25.936 [574/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:25.936 [575/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:25.936 [576/738] Linking static target lib/librte_node.a 00:03:26.194 [577/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:26.194 [578/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.194 [579/738] Linking static target drivers/librte_bus_vdev.a 00:03:26.194 [580/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:26.194 [581/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:26.194 [582/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.194 [583/738] Linking target lib/librte_node.so.23.0 00:03:26.195 [584/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.453 [585/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.453 [586/738] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:03:26.453 [587/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:26.453 [588/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.453 [589/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.453 [590/738] Linking static target drivers/librte_bus_pci.a 00:03:26.453 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:26.453 [592/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:26.453 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:26.712 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:26.712 [595/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.712 [596/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:26.712 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:26.712 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:26.712 [599/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:26.712 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:26.712 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:26.712 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.712 [603/738] Linking static target drivers/librte_mempool_ring.a 00:03:26.712 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.971 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:26.971 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:27.231 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:27.489 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:27.489 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:27.489 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:27.747 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:27.747 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:28.007 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:28.007 [614/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:28.007 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:28.007 [616/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:28.007 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:28.265 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:28.265 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:28.833 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:28.833 [621/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:28.833 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:29.092 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:29.092 
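(The driver entries above show DPDK's PMD packaging pattern: each driver's objects are first archived into a temporary libtmp_rte_*.a, a generated rte_*.pmd.c embedding the driver's PMD_INFO_STRING metadata is compiled from it (pmdinfogen), and the final librte_*.a/.so are linked with that extra object. A hedged way to read the embedded metadata back out of a built driver, assuming this job's build-tmp layout for the binary path:)
$ strings /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_bus_vdev.so.23.0 | grep PMD_INFO_STRING   # dump the embedded driver metadata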
[624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:29.092 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:29.352 [626/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:29.352 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:29.352 [628/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:29.352 [629/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:29.352 [630/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:29.352 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:29.611 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:29.870 [633/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:29.870 [634/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:29.870 [635/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:29.870 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:29.870 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:30.128 [638/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:30.128 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:30.128 [640/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:30.128 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:30.128 [642/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:30.128 [643/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:30.128 [644/738] Linking static target drivers/librte_net_i40e.a 00:03:30.128 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:30.401 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:30.401 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:30.401 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:30.694 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.694 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:30.694 [651/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:30.694 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:30.694 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:30.694 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:30.694 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:30.694 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:30.953 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:30.953 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:30.953 
[659/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:30.953 [660/738] Linking static target lib/librte_vhost.a 00:03:30.953 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:30.953 [662/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:31.213 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:31.213 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:31.213 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:31.213 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:31.472 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:31.732 [668/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.732 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:31.732 [670/738] Linking target lib/librte_vhost.so.23.0 00:03:31.732 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:31.991 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:31.991 [673/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:31.991 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:31.991 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:31.991 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:31.991 [677/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:32.250 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:32.250 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:32.250 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:32.250 [681/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:32.250 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:32.250 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:32.508 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:32.509 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:32.509 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:32.509 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:32.509 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:32.767 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:32.767 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:33.025 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:33.025 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:33.025 [693/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:33.025 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:33.283 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:33.283 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:33.543 [697/738] 
Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:33.543 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:33.543 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:33.802 [700/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:33.802 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:33.802 [702/738] Linking static target lib/librte_pipeline.a 00:03:33.802 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:33.802 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:33.802 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:34.061 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:34.061 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:34.061 [708/738] Linking target app/dpdk-dumpcap 00:03:34.061 [709/738] Linking target app/dpdk-pdump 00:03:34.061 [710/738] Linking target app/dpdk-proc-info 00:03:34.061 [711/738] Linking target app/dpdk-test-acl 00:03:34.320 [712/738] Linking target app/dpdk-test-cmdline 00:03:34.320 [713/738] Linking target app/dpdk-test-bbdev 00:03:34.320 [714/738] Linking target app/dpdk-test-compress-perf 00:03:34.320 [715/738] Linking target app/dpdk-test-crypto-perf 00:03:34.320 [716/738] Linking target app/dpdk-test-eventdev 00:03:34.577 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:34.577 [718/738] Linking target app/dpdk-test-fib 00:03:34.577 [719/738] Linking target app/dpdk-test-pipeline 00:03:34.577 [720/738] Linking target app/dpdk-test-flow-perf 00:03:34.577 [721/738] Linking target app/dpdk-test-gpudev 00:03:34.834 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:34.834 [723/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:34.834 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:35.093 [725/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:35.093 [726/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:35.093 [727/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:35.093 [728/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:35.352 [729/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:35.352 [730/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:35.352 [731/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:35.352 [732/738] Linking target app/dpdk-test-sad 00:03:35.610 [733/738] Linking target app/dpdk-test-regex 00:03:35.610 [734/738] Linking target app/dpdk-testpmd 00:03:35.610 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:35.869 [736/738] Linking target app/dpdk-test-security-perf 00:03:36.434 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.434 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:36.434 10:09:15 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:36.434 10:09:15 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:36.434 10:09:15 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:36.434 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 
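(The shell trace above is the tail of the build_native_dpdk step: a uname -s guard that special-cases FreeBSD, then ninja driving the install; ninja's install output follows below. As a sketch, the equivalent manual sequence — the meson configure options this job used are not visible in this excerpt, so the setup line is an assumption:)
$ uname -s                                      # Linux here, so the FreeBSD branch is skipped
$ meson setup build-tmp                         # configure (actual options elided from this log)
$ ninja -C build-tmp -j10                       # compile the 738 targets listed above
$ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install   # the install step traced above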
00:03:36.434 [0/1] Installing files. 00:03:36.694 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:36.694 Installing 
/home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.694 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.695 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.695 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.695 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:36.696 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:36.957 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:36.957 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:36.957 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:36.957 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:36.957 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:36.957 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:36.958 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:36.958 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.958 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:36.959 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:36.959 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:36.959 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.959 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:36.959 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.959 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.960 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.961 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to
/home/vagrant/spdk_repo/dpdk/build/include 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:36.962 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:36.962 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:36.962 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:36.962 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:36.962 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:36.962 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:36.962 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:36.962 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:36.962 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:36.962 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:36.962 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:36.962 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:36.962 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:36.962 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:36.962 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:36.962 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:36.962 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:36.962 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:36.962 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:36.962 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:36.962 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:36.962 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:36.962 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:36.962 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:36.962 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:36.962 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:36.962 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:36.962 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:36.962 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:36.962 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:36.962 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:36.962 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:36.962 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:36.962 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:36.962 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:36.962 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:36.962 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:36.962 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:36.962 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:36.962 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:36.962 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:36.962 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:36.962 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:36.962 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:36.962 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:36.962 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:36.962 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:36.962 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:36.962 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:36.962 
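Each librte_* library above receives the conventional Linux shared-object symlink chain: the fully versioned file (librte_foo.so.23.0) holds the code, the soname link (librte_foo.so.23) is what the runtime linker resolves, and the bare development name (librte_foo.so) is what the link editor uses at build time. A minimal sketch of inspecting one chain in this install tree; the paths are taken from the log above, and readlink/objdump are assumed to be available on the build host:

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  # The development name resolves to the soname link...
  readlink librte_kvargs.so          # -> librte_kvargs.so.23
  # ...which resolves to the real, fully versioned object.
  readlink librte_kvargs.so.23       # -> librte_kvargs.so.23.0
  # The soname embedded in the object should match the middle link.
  objdump -p librte_kvargs.so.23.0 | grep SONAME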
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:36.962 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:36.962 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:36.962 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:36.962 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:36.962 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:36.962 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:36.962 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:36.963 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:36.963 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:36.963 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:36.963 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:36.963 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:36.963 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:36.963 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:36.963 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:36.963 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:36.963 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:36.963 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:36.963 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:36.963 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:36.963 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:36.963 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:36.963 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:36.963 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:36.963 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:36.963 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:36.963 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:36.963 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:36.963 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:36.963 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:36.963 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:36.963 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:36.963 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:36.963 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:36.963 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:36.963 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:36.963 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:36.963 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:36.963 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:36.963 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:36.963 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:36.963 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:36.963 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:36.963 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:36.963 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:36.963 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:36.963 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:36.963 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:36.963 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:36.963 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:36.963 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:36.963 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:36.963 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:36.963 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:36.963 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:36.963 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:36.963 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:36.963 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:36.963 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:36.963 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:36.963 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:36.963 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:36.963 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:36.963 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:36.963 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:36.963 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:36.963 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:36.963 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:36.963 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:36.963 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:36.963 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:36.963 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:36.963 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:36.963 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:36.963 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:36.963 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:37.220 10:09:16 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:37.220 10:09:16 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:37.220 00:03:37.220 real 0m32.086s 00:03:37.220 user 3m36.026s 00:03:37.220 sys 0m33.456s 00:03:37.220 10:09:16 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.220 10:09:16 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:37.220 ************************************ 00:03:37.220 END TEST build_native_dpdk 00:03:37.220 ************************************ 00:03:37.220 10:09:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:37.220 10:09:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:37.220 10:09:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:37.220 10:09:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:37.220 10:09:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:37.220 10:09:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:37.220 10:09:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:37.220 10:09:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:37.220 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:37.220 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.220 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:37.220 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:37.478 Using 'verbs' RDMA provider 00:03:48.870 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:01.098 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:01.098 Creating mk/config.mk...done. 00:04:01.098 Creating mk/cc.flags.mk...done. 00:04:01.098 Type 'make' to build. 00:04:01.098 10:09:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:01.098 10:09:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:01.098 10:09:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:01.098 10:09:39 -- common/autotest_common.sh@10 -- $ set +x 00:04:01.098 ************************************ 00:04:01.098 START TEST make 00:04:01.098 ************************************ 00:04:01.098 10:09:39 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:01.098 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:01.098 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:01.098 meson setup builddir \ 00:04:01.098 -Dwith-libaio=enabled \ 00:04:01.098 -Dwith-liburing=enabled \ 00:04:01.098 -Dwith-libvfn=disabled \ 00:04:01.098 -Dwith-spdk=disabled \ 00:04:01.098 -Dexamples=false \ 00:04:01.098 -Dtests=false \ 00:04:01.098 -Dtools=false && \ 00:04:01.098 meson compile -C builddir && \ 00:04:01.098 cd -) 00:04:01.098 make[1]: Nothing to be done for 'all'. 
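The configure step above locates DPDK through the pkg-config files installed earlier in this log (libdpdk.pc and libdpdk-libs.pc under build/lib/pkgconfig), which is why it reports that directory as the source of additional libs. A minimal sketch of reproducing that lookup by hand; the PKG_CONFIG_PATH value mirrors the path from this log, and hello_dpdk.c is a hypothetical stand-in for any small EAL-based test program:

  # Point pkg-config at the just-installed DPDK tree (path from the log above).
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  # These are the compile and link flags that configure consumes.
  pkg-config --cflags libdpdk
  pkg-config --libs libdpdk
  # hello_dpdk.c is hypothetical; any program calling rte_eal_init() would do.
  gcc hello_dpdk.c $(pkg-config --cflags --libs libdpdk) -o hello_dpdk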
00:04:01.666 The Meson build system 00:04:01.666 Version: 1.5.0 00:04:01.666 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:01.666 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:01.666 Build type: native build 00:04:01.666 Project name: xnvme 00:04:01.667 Project version: 0.7.5 00:04:01.667 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:01.667 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:01.667 Host machine cpu family: x86_64 00:04:01.667 Host machine cpu: x86_64 00:04:01.667 Message: host_machine.system: linux 00:04:01.667 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:01.667 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:01.667 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:01.667 Run-time dependency threads found: YES 00:04:01.667 Has header "setupapi.h" : NO 00:04:01.667 Has header "linux/blkzoned.h" : YES 00:04:01.667 Has header "linux/blkzoned.h" : YES (cached) 00:04:01.667 Has header "libaio.h" : YES 00:04:01.667 Library aio found: YES 00:04:01.667 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:01.667 Run-time dependency liburing found: YES 2.2 00:04:01.667 Dependency libvfn skipped: feature with-libvfn disabled 00:04:01.667 Found CMake: /usr/bin/cmake (3.27.7) 00:04:01.667 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:01.667 Subproject spdk : skipped: feature with-spdk disabled 00:04:01.667 Run-time dependency appleframeworks found: NO (tried framework) 00:04:01.667 Run-time dependency appleframeworks found: NO (tried framework) 00:04:01.667 Library rt found: YES 00:04:01.667 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:01.667 Configuring xnvme_config.h using configuration 00:04:01.667 Configuring xnvme.spec using configuration 00:04:01.667 Run-time dependency bash-completion found: YES 2.11 00:04:01.667 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:01.667 Program cp found: YES (/usr/bin/cp) 00:04:01.667 Build targets in project: 3 00:04:01.667 00:04:01.667 xnvme 0.7.5 00:04:01.667 00:04:01.667 Subprojects 00:04:01.667 spdk : NO Feature 'with-spdk' disabled 00:04:01.667 00:04:01.667 User defined options 00:04:01.667 examples : false 00:04:01.667 tests : false 00:04:01.667 tools : false 00:04:01.667 with-libaio : enabled 00:04:01.667 with-liburing: enabled 00:04:01.667 with-libvfn : disabled 00:04:01.667 with-spdk : disabled 00:04:01.667 00:04:01.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:02.232 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:02.232 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:02.232 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:02.232 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:02.232 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:02.232 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:02.232 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:02.232 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:02.232 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:02.232 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:02.232 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
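The summary above records which optional xnvme backends meson detected: libaio and liburing were found and enabled, while libvfn and the spdk subproject were explicitly disabled by the -D flags shown earlier. A minimal sketch of re-checking the resolved options after setup; the source and build directories are taken from the log, and meson configure with no -D arguments simply prints current option values:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  # With no -D options, 'meson configure' lists each option and its current value.
  meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)'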
00:04:02.232 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:02.232 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:02.491 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:02.491 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:02.491 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:02.491 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:02.491 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:02.491 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:02.491 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:02.491 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:02.491 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:02.491 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:02.491 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:02.491 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:02.491 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:02.491 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:02.491 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:02.491 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:02.491 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:02.491 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:02.491 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:02.491 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:02.491 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:02.491 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:02.491 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:02.491 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:02.491 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:02.491 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:02.491 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:02.491 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:02.491 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:02.491 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:02.491 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:02.491 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:02.751 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:02.751 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:02.751 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:02.751 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:02.751 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:02.751 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 
00:04:02.751 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:02.751 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:02.751 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:02.751 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:02.751 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:02.751 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:02.751 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:02.751 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:02.751 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:02.751 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:02.751 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:02.751 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:02.751 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:02.751 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:02.751 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:02.751 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:02.751 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:02.751 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:02.751 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:02.751 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:03.011 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:03.011 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:03.011 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:03.270 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:03.270 [75/76] Linking static target lib/libxnvme.a 00:04:03.270 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:03.270 INFO: autodetecting backend as ninja 00:04:03.270 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:03.270 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:41.983 CC lib/ut_mock/mock.o 00:04:41.983 CC lib/ut/ut.o 00:04:41.983 CC lib/log/log.o 00:04:41.983 CC lib/log/log_flags.o 00:04:41.983 CC lib/log/log_deprecated.o 00:04:41.983 LIB libspdk_ut.a 00:04:41.983 LIB libspdk_ut_mock.a 00:04:41.983 SO libspdk_ut.so.2.0 00:04:41.983 SO libspdk_ut_mock.so.6.0 00:04:41.983 LIB libspdk_log.a 00:04:41.983 SO libspdk_log.so.7.1 00:04:41.983 SYMLINK libspdk_ut.so 00:04:41.983 SYMLINK libspdk_ut_mock.so 00:04:41.983 SYMLINK libspdk_log.so 00:04:41.983 CXX lib/trace_parser/trace.o 00:04:41.983 CC lib/dma/dma.o 00:04:41.983 CC lib/ioat/ioat.o 00:04:41.983 CC lib/util/base64.o 00:04:41.983 CC lib/util/bit_array.o 00:04:41.983 CC lib/util/crc16.o 00:04:41.983 CC lib/util/cpuset.o 00:04:41.983 CC lib/util/crc32c.o 00:04:41.983 CC lib/util/crc32.o 00:04:41.983 CC lib/vfio_user/host/vfio_user_pci.o 00:04:41.983 CC lib/util/crc32_ieee.o 00:04:41.983 CC lib/util/crc64.o 00:04:41.983 CC lib/util/dif.o 00:04:41.983 CC lib/util/fd.o 00:04:41.983 LIB libspdk_dma.a 00:04:41.983 CC lib/util/fd_group.o 00:04:41.983 SO libspdk_dma.so.5.0 00:04:41.983 CC lib/vfio_user/host/vfio_user.o 00:04:41.983 CC lib/util/file.o 00:04:41.983 CC lib/util/hexlify.o 00:04:41.983 SYMLINK libspdk_dma.so 00:04:41.983 CC lib/util/iov.o 00:04:41.983 LIB 
libspdk_ioat.a 00:04:41.983 CC lib/util/math.o 00:04:41.983 CC lib/util/net.o 00:04:41.983 SO libspdk_ioat.so.7.0 00:04:41.983 SYMLINK libspdk_ioat.so 00:04:41.983 CC lib/util/pipe.o 00:04:41.983 CC lib/util/strerror_tls.o 00:04:41.983 CC lib/util/string.o 00:04:41.983 CC lib/util/uuid.o 00:04:41.983 LIB libspdk_vfio_user.a 00:04:41.983 CC lib/util/xor.o 00:04:41.983 CC lib/util/zipf.o 00:04:41.983 CC lib/util/md5.o 00:04:41.983 SO libspdk_vfio_user.so.5.0 00:04:41.983 SYMLINK libspdk_vfio_user.so 00:04:41.983 LIB libspdk_util.a 00:04:41.983 LIB libspdk_trace_parser.a 00:04:41.983 SO libspdk_util.so.10.1 00:04:41.983 SO libspdk_trace_parser.so.6.0 00:04:41.983 SYMLINK libspdk_util.so 00:04:41.983 SYMLINK libspdk_trace_parser.so 00:04:41.983 CC lib/idxd/idxd.o 00:04:41.983 CC lib/idxd/idxd_user.o 00:04:41.983 CC lib/vmd/vmd.o 00:04:41.983 CC lib/vmd/led.o 00:04:41.983 CC lib/idxd/idxd_kernel.o 00:04:41.983 CC lib/rdma_utils/rdma_utils.o 00:04:41.983 CC lib/env_dpdk/env.o 00:04:41.983 CC lib/conf/conf.o 00:04:41.983 CC lib/env_dpdk/memory.o 00:04:41.983 CC lib/json/json_parse.o 00:04:41.983 CC lib/json/json_util.o 00:04:41.983 CC lib/json/json_write.o 00:04:41.983 LIB libspdk_conf.a 00:04:41.983 CC lib/env_dpdk/pci.o 00:04:41.983 CC lib/env_dpdk/init.o 00:04:41.983 SO libspdk_conf.so.6.0 00:04:41.983 LIB libspdk_rdma_utils.a 00:04:41.983 SO libspdk_rdma_utils.so.1.0 00:04:41.983 SYMLINK libspdk_conf.so 00:04:41.983 CC lib/env_dpdk/threads.o 00:04:41.983 SYMLINK libspdk_rdma_utils.so 00:04:41.983 CC lib/env_dpdk/pci_ioat.o 00:04:41.983 CC lib/env_dpdk/pci_virtio.o 00:04:41.983 LIB libspdk_json.a 00:04:41.983 CC lib/env_dpdk/pci_vmd.o 00:04:41.983 SO libspdk_json.so.6.0 00:04:41.983 CC lib/env_dpdk/pci_idxd.o 00:04:41.983 CC lib/rdma_provider/common.o 00:04:41.983 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:41.983 SYMLINK libspdk_json.so 00:04:41.983 CC lib/env_dpdk/pci_event.o 00:04:41.983 CC lib/env_dpdk/sigbus_handler.o 00:04:41.983 CC lib/env_dpdk/pci_dpdk.o 00:04:41.983 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:41.983 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:41.983 LIB libspdk_idxd.a 00:04:41.983 LIB libspdk_vmd.a 00:04:41.983 SO libspdk_idxd.so.12.1 00:04:41.983 SO libspdk_vmd.so.6.0 00:04:41.983 LIB libspdk_rdma_provider.a 00:04:41.983 SO libspdk_rdma_provider.so.7.0 00:04:41.983 SYMLINK libspdk_idxd.so 00:04:41.983 SYMLINK libspdk_vmd.so 00:04:41.983 SYMLINK libspdk_rdma_provider.so 00:04:41.983 CC lib/jsonrpc/jsonrpc_server.o 00:04:41.983 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:41.983 CC lib/jsonrpc/jsonrpc_client.o 00:04:41.983 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:41.983 LIB libspdk_jsonrpc.a 00:04:41.983 SO libspdk_jsonrpc.so.6.0 00:04:41.983 SYMLINK libspdk_jsonrpc.so 00:04:41.983 CC lib/rpc/rpc.o 00:04:41.983 LIB libspdk_env_dpdk.a 00:04:41.983 SO libspdk_env_dpdk.so.15.1 00:04:41.983 LIB libspdk_rpc.a 00:04:41.983 SYMLINK libspdk_env_dpdk.so 00:04:41.983 SO libspdk_rpc.so.6.0 00:04:41.983 SYMLINK libspdk_rpc.so 00:04:41.983 CC lib/trace/trace.o 00:04:41.983 CC lib/trace/trace_rpc.o 00:04:41.983 CC lib/trace/trace_flags.o 00:04:41.983 CC lib/keyring/keyring.o 00:04:41.983 CC lib/keyring/keyring_rpc.o 00:04:41.983 CC lib/notify/notify_rpc.o 00:04:41.983 CC lib/notify/notify.o 00:04:41.983 LIB libspdk_notify.a 00:04:41.983 SO libspdk_notify.so.6.0 00:04:41.983 LIB libspdk_keyring.a 00:04:41.983 SYMLINK libspdk_notify.so 00:04:41.983 LIB libspdk_trace.a 00:04:41.983 SO libspdk_keyring.so.2.0 00:04:41.983 SO libspdk_trace.so.11.0 00:04:41.983 SYMLINK libspdk_keyring.so 
00:04:41.983 SYMLINK libspdk_trace.so 00:04:41.983 CC lib/thread/thread.o 00:04:41.983 CC lib/thread/iobuf.o 00:04:41.983 CC lib/sock/sock.o 00:04:41.983 CC lib/sock/sock_rpc.o 00:04:41.983 LIB libspdk_sock.a 00:04:41.983 SO libspdk_sock.so.10.0 00:04:41.983 SYMLINK libspdk_sock.so 00:04:41.983 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:41.983 CC lib/nvme/nvme_ns_cmd.o 00:04:41.983 CC lib/nvme/nvme_ctrlr.o 00:04:41.983 CC lib/nvme/nvme_fabric.o 00:04:41.983 CC lib/nvme/nvme_pcie_common.o 00:04:41.983 CC lib/nvme/nvme_ns.o 00:04:41.983 CC lib/nvme/nvme_pcie.o 00:04:41.983 CC lib/nvme/nvme_qpair.o 00:04:41.983 CC lib/nvme/nvme.o 00:04:41.983 CC lib/nvme/nvme_quirks.o 00:04:41.983 CC lib/nvme/nvme_transport.o 00:04:41.983 CC lib/nvme/nvme_discovery.o 00:04:41.983 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:41.983 LIB libspdk_thread.a 00:04:41.983 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:41.983 CC lib/nvme/nvme_tcp.o 00:04:41.983 SO libspdk_thread.so.11.0 00:04:41.983 CC lib/nvme/nvme_opal.o 00:04:41.983 SYMLINK libspdk_thread.so 00:04:41.983 CC lib/nvme/nvme_io_msg.o 00:04:41.983 CC lib/nvme/nvme_poll_group.o 00:04:41.983 CC lib/nvme/nvme_zns.o 00:04:41.983 CC lib/nvme/nvme_stubs.o 00:04:41.983 CC lib/nvme/nvme_auth.o 00:04:41.983 CC lib/nvme/nvme_cuse.o 00:04:41.983 CC lib/nvme/nvme_rdma.o 00:04:41.983 CC lib/accel/accel.o 00:04:41.983 CC lib/blob/blobstore.o 00:04:41.983 CC lib/init/json_config.o 00:04:41.983 CC lib/blob/request.o 00:04:41.983 CC lib/virtio/virtio.o 00:04:41.983 CC lib/init/subsystem.o 00:04:42.242 CC lib/init/subsystem_rpc.o 00:04:42.242 CC lib/virtio/virtio_vhost_user.o 00:04:42.242 CC lib/virtio/virtio_vfio_user.o 00:04:42.242 CC lib/blob/zeroes.o 00:04:42.242 CC lib/init/rpc.o 00:04:42.500 CC lib/blob/blob_bs_dev.o 00:04:42.500 LIB libspdk_init.a 00:04:42.500 CC lib/virtio/virtio_pci.o 00:04:42.500 SO libspdk_init.so.6.0 00:04:42.500 CC lib/accel/accel_rpc.o 00:04:42.500 SYMLINK libspdk_init.so 00:04:42.500 CC lib/accel/accel_sw.o 00:04:42.500 CC lib/fsdev/fsdev.o 00:04:42.500 CC lib/fsdev/fsdev_io.o 00:04:42.500 CC lib/fsdev/fsdev_rpc.o 00:04:42.759 LIB libspdk_virtio.a 00:04:42.759 CC lib/event/app.o 00:04:42.759 CC lib/event/reactor.o 00:04:42.759 SO libspdk_virtio.so.7.0 00:04:42.759 CC lib/event/log_rpc.o 00:04:42.759 SYMLINK libspdk_virtio.so 00:04:42.759 CC lib/event/app_rpc.o 00:04:42.759 CC lib/event/scheduler_static.o 00:04:42.759 LIB libspdk_accel.a 00:04:43.019 SO libspdk_accel.so.16.0 00:04:43.019 LIB libspdk_nvme.a 00:04:43.019 SYMLINK libspdk_accel.so 00:04:43.019 LIB libspdk_event.a 00:04:43.019 SO libspdk_nvme.so.15.0 00:04:43.019 SO libspdk_event.so.14.0 00:04:43.281 CC lib/bdev/bdev_rpc.o 00:04:43.281 CC lib/bdev/bdev.o 00:04:43.281 CC lib/bdev/bdev_zone.o 00:04:43.281 CC lib/bdev/part.o 00:04:43.281 CC lib/bdev/scsi_nvme.o 00:04:43.281 SYMLINK libspdk_event.so 00:04:43.281 LIB libspdk_fsdev.a 00:04:43.281 SO libspdk_fsdev.so.2.0 00:04:43.281 SYMLINK libspdk_nvme.so 00:04:43.281 SYMLINK libspdk_fsdev.so 00:04:43.543 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.116 LIB libspdk_fuse_dispatcher.a 00:04:44.116 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.375 SYMLINK libspdk_fuse_dispatcher.so 00:04:44.945 LIB libspdk_blob.a 00:04:45.205 SO libspdk_blob.so.12.0 00:04:45.205 SYMLINK libspdk_blob.so 00:04:45.465 CC lib/lvol/lvol.o 00:04:45.465 CC lib/blobfs/blobfs.o 00:04:45.465 CC lib/blobfs/tree.o 00:04:46.031 LIB libspdk_bdev.a 00:04:46.031 SO libspdk_bdev.so.17.0 00:04:46.289 LIB libspdk_lvol.a 00:04:46.289 SYMLINK libspdk_bdev.so 00:04:46.289 SO 
libspdk_lvol.so.11.0 00:04:46.289 LIB libspdk_blobfs.a 00:04:46.289 SYMLINK libspdk_lvol.so 00:04:46.289 SO libspdk_blobfs.so.11.0 00:04:46.289 CC lib/ublk/ublk.o 00:04:46.289 CC lib/ublk/ublk_rpc.o 00:04:46.289 CC lib/nbd/nbd.o 00:04:46.289 CC lib/nbd/nbd_rpc.o 00:04:46.289 CC lib/scsi/dev.o 00:04:46.289 CC lib/scsi/port.o 00:04:46.289 CC lib/scsi/lun.o 00:04:46.289 CC lib/nvmf/ctrlr.o 00:04:46.289 SYMLINK libspdk_blobfs.so 00:04:46.289 CC lib/nvmf/ctrlr_discovery.o 00:04:46.289 CC lib/ftl/ftl_core.o 00:04:46.547 CC lib/nvmf/ctrlr_bdev.o 00:04:46.547 CC lib/nvmf/subsystem.o 00:04:46.547 CC lib/nvmf/nvmf.o 00:04:46.547 CC lib/scsi/scsi.o 00:04:46.547 CC lib/ftl/ftl_init.o 00:04:46.547 CC lib/nvmf/nvmf_rpc.o 00:04:46.547 CC lib/scsi/scsi_bdev.o 00:04:46.805 LIB libspdk_nbd.a 00:04:46.805 CC lib/ftl/ftl_layout.o 00:04:46.805 SO libspdk_nbd.so.7.0 00:04:46.805 SYMLINK libspdk_nbd.so 00:04:46.805 CC lib/ftl/ftl_debug.o 00:04:46.805 CC lib/ftl/ftl_io.o 00:04:47.064 LIB libspdk_ublk.a 00:04:47.064 SO libspdk_ublk.so.3.0 00:04:47.064 CC lib/nvmf/transport.o 00:04:47.064 CC lib/nvmf/tcp.o 00:04:47.064 CC lib/ftl/ftl_sb.o 00:04:47.064 SYMLINK libspdk_ublk.so 00:04:47.064 CC lib/nvmf/stubs.o 00:04:47.064 CC lib/nvmf/mdns_server.o 00:04:47.064 CC lib/scsi/scsi_pr.o 00:04:47.323 CC lib/ftl/ftl_l2p.o 00:04:47.323 CC lib/ftl/ftl_l2p_flat.o 00:04:47.323 CC lib/nvmf/rdma.o 00:04:47.323 CC lib/ftl/ftl_nv_cache.o 00:04:47.323 CC lib/nvmf/auth.o 00:04:47.323 CC lib/scsi/scsi_rpc.o 00:04:47.323 CC lib/scsi/task.o 00:04:47.584 CC lib/ftl/ftl_band.o 00:04:47.584 CC lib/ftl/ftl_band_ops.o 00:04:47.584 LIB libspdk_scsi.a 00:04:47.584 CC lib/ftl/ftl_writer.o 00:04:47.584 CC lib/ftl/ftl_rq.o 00:04:47.584 SO libspdk_scsi.so.9.0 00:04:47.846 SYMLINK libspdk_scsi.so 00:04:47.846 CC lib/ftl/ftl_reloc.o 00:04:47.846 CC lib/ftl/ftl_l2p_cache.o 00:04:47.846 CC lib/ftl/ftl_p2l.o 00:04:47.846 CC lib/ftl/ftl_p2l_log.o 00:04:47.846 CC lib/ftl/mngt/ftl_mngt.o 00:04:48.106 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:48.106 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:48.106 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:48.106 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:48.106 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:48.364 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:48.364 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:48.364 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:48.364 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:48.364 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:48.364 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:48.622 CC lib/iscsi/conn.o 00:04:48.622 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:48.622 CC lib/ftl/utils/ftl_conf.o 00:04:48.622 CC lib/vhost/vhost.o 00:04:48.622 CC lib/vhost/vhost_rpc.o 00:04:48.622 CC lib/iscsi/init_grp.o 00:04:48.622 CC lib/iscsi/iscsi.o 00:04:48.622 CC lib/vhost/vhost_scsi.o 00:04:48.622 CC lib/iscsi/param.o 00:04:48.622 CC lib/ftl/utils/ftl_md.o 00:04:48.902 CC lib/iscsi/portal_grp.o 00:04:48.902 CC lib/iscsi/tgt_node.o 00:04:48.902 CC lib/ftl/utils/ftl_mempool.o 00:04:49.161 CC lib/vhost/vhost_blk.o 00:04:49.161 CC lib/ftl/utils/ftl_bitmap.o 00:04:49.161 CC lib/ftl/utils/ftl_property.o 00:04:49.161 CC lib/vhost/rte_vhost_user.o 00:04:49.161 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:49.161 CC lib/iscsi/iscsi_subsystem.o 00:04:49.161 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:49.420 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:49.420 CC lib/iscsi/iscsi_rpc.o 00:04:49.420 LIB libspdk_nvmf.a 00:04:49.420 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:49.420 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:49.420 SO libspdk_nvmf.so.20.0 00:04:49.420 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:49.420 CC lib/iscsi/task.o 00:04:49.420 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:49.420 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:49.679 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:49.679 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:49.679 SYMLINK libspdk_nvmf.so 00:04:49.679 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:49.679 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:49.679 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:49.679 CC lib/ftl/base/ftl_base_dev.o 00:04:49.679 CC lib/ftl/base/ftl_base_bdev.o 00:04:49.679 CC lib/ftl/ftl_trace.o 00:04:49.937 LIB libspdk_ftl.a 00:04:49.937 LIB libspdk_iscsi.a 00:04:49.937 SO libspdk_iscsi.so.8.0 00:04:49.937 SO libspdk_ftl.so.9.0 00:04:50.195 LIB libspdk_vhost.a 00:04:50.195 SO libspdk_vhost.so.8.0 00:04:50.195 SYMLINK libspdk_iscsi.so 00:04:50.195 SYMLINK libspdk_vhost.so 00:04:50.195 SYMLINK libspdk_ftl.so 00:04:50.513 CC module/env_dpdk/env_dpdk_rpc.o 00:04:50.513 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:50.513 CC module/blob/bdev/blob_bdev.o 00:04:50.513 CC module/accel/ioat/accel_ioat.o 00:04:50.513 CC module/accel/dsa/accel_dsa.o 00:04:50.513 CC module/keyring/file/keyring.o 00:04:50.795 CC module/fsdev/aio/fsdev_aio.o 00:04:50.795 CC module/sock/posix/posix.o 00:04:50.795 CC module/accel/iaa/accel_iaa.o 00:04:50.795 CC module/accel/error/accel_error.o 00:04:50.795 LIB libspdk_env_dpdk_rpc.a 00:04:50.795 SO libspdk_env_dpdk_rpc.so.6.0 00:04:50.795 CC module/keyring/file/keyring_rpc.o 00:04:50.795 SYMLINK libspdk_env_dpdk_rpc.so 00:04:50.795 CC module/accel/iaa/accel_iaa_rpc.o 00:04:50.795 CC module/accel/ioat/accel_ioat_rpc.o 00:04:50.795 LIB libspdk_scheduler_dynamic.a 00:04:50.795 SO libspdk_scheduler_dynamic.so.4.0 00:04:50.795 CC module/accel/error/accel_error_rpc.o 00:04:50.795 LIB libspdk_keyring_file.a 00:04:50.795 SYMLINK libspdk_scheduler_dynamic.so 00:04:50.795 CC module/accel/dsa/accel_dsa_rpc.o 00:04:50.795 SO libspdk_keyring_file.so.2.0 00:04:50.795 LIB libspdk_accel_iaa.a 00:04:50.795 LIB libspdk_accel_ioat.a 00:04:50.795 SO libspdk_accel_iaa.so.3.0 00:04:50.795 LIB libspdk_blob_bdev.a 00:04:50.795 SO libspdk_accel_ioat.so.6.0 00:04:50.795 SYMLINK libspdk_keyring_file.so 00:04:50.795 LIB libspdk_accel_error.a 00:04:50.795 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:50.795 SO libspdk_blob_bdev.so.12.0 00:04:50.795 SYMLINK libspdk_accel_iaa.so 00:04:50.795 SO libspdk_accel_error.so.2.0 00:04:50.795 SYMLINK libspdk_accel_ioat.so 00:04:50.795 LIB libspdk_accel_dsa.a 00:04:51.054 CC module/fsdev/aio/linux_aio_mgr.o 00:04:51.054 SYMLINK libspdk_blob_bdev.so 00:04:51.054 SYMLINK libspdk_accel_error.so 00:04:51.054 SO libspdk_accel_dsa.so.5.0 00:04:51.054 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:51.054 CC module/keyring/linux/keyring.o 00:04:51.054 SYMLINK libspdk_accel_dsa.so 00:04:51.054 CC module/keyring/linux/keyring_rpc.o 00:04:51.054 CC module/scheduler/gscheduler/gscheduler.o 00:04:51.054 LIB libspdk_scheduler_dpdk_governor.a 00:04:51.054 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:51.054 LIB libspdk_keyring_linux.a 00:04:51.054 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:51.054 SO libspdk_keyring_linux.so.1.0 00:04:51.054 CC module/bdev/delay/vbdev_delay.o 00:04:51.054 CC module/blobfs/bdev/blobfs_bdev.o 00:04:51.054 CC module/bdev/error/vbdev_error.o 00:04:51.312 SYMLINK libspdk_keyring_linux.so 00:04:51.312 CC module/bdev/error/vbdev_error_rpc.o 00:04:51.312 LIB libspdk_scheduler_gscheduler.a 00:04:51.312 CC module/bdev/gpt/gpt.o 00:04:51.312 SO 
libspdk_scheduler_gscheduler.so.4.0 00:04:51.312 CC module/bdev/lvol/vbdev_lvol.o 00:04:51.312 CC module/bdev/malloc/bdev_malloc.o 00:04:51.312 SYMLINK libspdk_scheduler_gscheduler.so 00:04:51.312 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:51.312 LIB libspdk_sock_posix.a 00:04:51.312 LIB libspdk_fsdev_aio.a 00:04:51.312 SO libspdk_sock_posix.so.6.0 00:04:51.312 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:51.312 SO libspdk_fsdev_aio.so.1.0 00:04:51.312 CC module/bdev/gpt/vbdev_gpt.o 00:04:51.312 LIB libspdk_bdev_error.a 00:04:51.312 SYMLINK libspdk_sock_posix.so 00:04:51.312 SYMLINK libspdk_fsdev_aio.so 00:04:51.312 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:51.312 SO libspdk_bdev_error.so.6.0 00:04:51.571 CC module/bdev/null/bdev_null.o 00:04:51.571 LIB libspdk_blobfs_bdev.a 00:04:51.571 SYMLINK libspdk_bdev_error.so 00:04:51.571 SO libspdk_blobfs_bdev.so.6.0 00:04:51.571 LIB libspdk_bdev_delay.a 00:04:51.571 SO libspdk_bdev_delay.so.6.0 00:04:51.571 CC module/bdev/nvme/bdev_nvme.o 00:04:51.571 CC module/bdev/passthru/vbdev_passthru.o 00:04:51.571 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:51.571 SYMLINK libspdk_blobfs_bdev.so 00:04:51.571 CC module/bdev/null/bdev_null_rpc.o 00:04:51.571 SYMLINK libspdk_bdev_delay.so 00:04:51.571 LIB libspdk_bdev_gpt.a 00:04:51.571 CC module/bdev/raid/bdev_raid.o 00:04:51.571 LIB libspdk_bdev_malloc.a 00:04:51.571 SO libspdk_bdev_gpt.so.6.0 00:04:51.571 SO libspdk_bdev_malloc.so.6.0 00:04:51.571 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:51.571 CC module/bdev/raid/bdev_raid_rpc.o 00:04:51.571 SYMLINK libspdk_bdev_malloc.so 00:04:51.571 SYMLINK libspdk_bdev_gpt.so 00:04:51.571 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:51.829 CC module/bdev/split/vbdev_split.o 00:04:51.829 CC module/bdev/nvme/nvme_rpc.o 00:04:51.829 LIB libspdk_bdev_null.a 00:04:51.829 SO libspdk_bdev_null.so.6.0 00:04:51.829 SYMLINK libspdk_bdev_null.so 00:04:51.829 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:51.829 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:51.829 LIB libspdk_bdev_passthru.a 00:04:51.829 SO libspdk_bdev_passthru.so.6.0 00:04:51.829 SYMLINK libspdk_bdev_passthru.so 00:04:51.829 CC module/bdev/split/vbdev_split_rpc.o 00:04:51.830 CC module/bdev/xnvme/bdev_xnvme.o 00:04:52.088 CC module/bdev/raid/bdev_raid_sb.o 00:04:52.088 LIB libspdk_bdev_lvol.a 00:04:52.088 SO libspdk_bdev_lvol.so.6.0 00:04:52.088 CC module/bdev/aio/bdev_aio.o 00:04:52.088 LIB libspdk_bdev_split.a 00:04:52.088 LIB libspdk_bdev_zone_block.a 00:04:52.088 CC module/bdev/ftl/bdev_ftl.o 00:04:52.088 SO libspdk_bdev_split.so.6.0 00:04:52.088 SO libspdk_bdev_zone_block.so.6.0 00:04:52.088 SYMLINK libspdk_bdev_lvol.so 00:04:52.088 CC module/bdev/raid/raid0.o 00:04:52.088 SYMLINK libspdk_bdev_split.so 00:04:52.088 CC module/bdev/aio/bdev_aio_rpc.o 00:04:52.088 SYMLINK libspdk_bdev_zone_block.so 00:04:52.088 CC module/bdev/raid/raid1.o 00:04:52.088 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:52.347 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:52.347 CC module/bdev/raid/concat.o 00:04:52.347 LIB libspdk_bdev_aio.a 00:04:52.347 CC module/bdev/nvme/bdev_mdns_client.o 00:04:52.347 SO libspdk_bdev_aio.so.6.0 00:04:52.347 LIB libspdk_bdev_xnvme.a 00:04:52.347 SO libspdk_bdev_xnvme.so.3.0 00:04:52.347 CC module/bdev/iscsi/bdev_iscsi.o 00:04:52.347 SYMLINK libspdk_bdev_aio.so 00:04:52.347 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:52.347 SYMLINK libspdk_bdev_xnvme.so 00:04:52.347 CC module/bdev/nvme/vbdev_opal.o 00:04:52.347 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:52.347 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:52.347 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:52.347 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:52.606 LIB libspdk_bdev_ftl.a 00:04:52.606 SO libspdk_bdev_ftl.so.6.0 00:04:52.606 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:52.606 SYMLINK libspdk_bdev_ftl.so 00:04:52.606 LIB libspdk_bdev_raid.a 00:04:52.606 LIB libspdk_bdev_iscsi.a 00:04:52.606 SO libspdk_bdev_raid.so.6.0 00:04:52.606 SO libspdk_bdev_iscsi.so.6.0 00:04:52.866 SYMLINK libspdk_bdev_raid.so 00:04:52.866 SYMLINK libspdk_bdev_iscsi.so 00:04:52.866 LIB libspdk_bdev_virtio.a 00:04:52.866 SO libspdk_bdev_virtio.so.6.0 00:04:52.866 SYMLINK libspdk_bdev_virtio.so 00:04:54.249 LIB libspdk_bdev_nvme.a 00:04:54.249 SO libspdk_bdev_nvme.so.7.1 00:04:54.508 SYMLINK libspdk_bdev_nvme.so 00:04:54.766 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:54.766 CC module/event/subsystems/scheduler/scheduler.o 00:04:54.766 CC module/event/subsystems/iobuf/iobuf.o 00:04:54.766 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:54.767 CC module/event/subsystems/keyring/keyring.o 00:04:54.767 CC module/event/subsystems/fsdev/fsdev.o 00:04:54.767 CC module/event/subsystems/sock/sock.o 00:04:54.767 CC module/event/subsystems/vmd/vmd.o 00:04:54.767 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:54.767 LIB libspdk_event_keyring.a 00:04:54.767 LIB libspdk_event_vhost_blk.a 00:04:54.767 SO libspdk_event_keyring.so.1.0 00:04:54.767 LIB libspdk_event_scheduler.a 00:04:54.767 LIB libspdk_event_fsdev.a 00:04:55.024 LIB libspdk_event_iobuf.a 00:04:55.024 LIB libspdk_event_sock.a 00:04:55.024 SO libspdk_event_vhost_blk.so.3.0 00:04:55.024 SO libspdk_event_scheduler.so.4.0 00:04:55.024 SO libspdk_event_fsdev.so.1.0 00:04:55.024 LIB libspdk_event_vmd.a 00:04:55.024 SYMLINK libspdk_event_keyring.so 00:04:55.024 SO libspdk_event_sock.so.5.0 00:04:55.024 SO libspdk_event_iobuf.so.3.0 00:04:55.024 SO libspdk_event_vmd.so.6.0 00:04:55.024 SYMLINK libspdk_event_scheduler.so 00:04:55.024 SYMLINK libspdk_event_vhost_blk.so 00:04:55.024 SYMLINK libspdk_event_fsdev.so 00:04:55.024 SYMLINK libspdk_event_sock.so 00:04:55.024 SYMLINK libspdk_event_iobuf.so 00:04:55.024 SYMLINK libspdk_event_vmd.so 00:04:55.283 CC module/event/subsystems/accel/accel.o 00:04:55.283 LIB libspdk_event_accel.a 00:04:55.283 SO libspdk_event_accel.so.6.0 00:04:55.283 SYMLINK libspdk_event_accel.so 00:04:55.541 CC module/event/subsystems/bdev/bdev.o 00:04:55.799 LIB libspdk_event_bdev.a 00:04:55.799 SO libspdk_event_bdev.so.6.0 00:04:55.799 SYMLINK libspdk_event_bdev.so 00:04:56.056 CC module/event/subsystems/nbd/nbd.o 00:04:56.056 CC module/event/subsystems/scsi/scsi.o 00:04:56.056 CC module/event/subsystems/ublk/ublk.o 00:04:56.056 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:56.056 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:56.056 LIB libspdk_event_nbd.a 00:04:56.056 LIB libspdk_event_ublk.a 00:04:56.056 SO libspdk_event_nbd.so.6.0 00:04:56.056 LIB libspdk_event_scsi.a 00:04:56.056 SO libspdk_event_ublk.so.3.0 00:04:56.056 SO libspdk_event_scsi.so.6.0 00:04:56.314 SYMLINK libspdk_event_nbd.so 00:04:56.314 SYMLINK libspdk_event_ublk.so 00:04:56.314 SYMLINK libspdk_event_scsi.so 00:04:56.314 LIB libspdk_event_nvmf.a 00:04:56.314 SO libspdk_event_nvmf.so.6.0 00:04:56.314 SYMLINK libspdk_event_nvmf.so 00:04:56.314 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:56.314 CC module/event/subsystems/iscsi/iscsi.o 00:04:56.574 LIB libspdk_event_vhost_scsi.a 00:04:56.574 LIB libspdk_event_iscsi.a 00:04:56.574 SO 
libspdk_event_vhost_scsi.so.3.0 00:04:56.574 SO libspdk_event_iscsi.so.6.0 00:04:56.574 SYMLINK libspdk_event_vhost_scsi.so 00:04:56.574 SYMLINK libspdk_event_iscsi.so 00:04:56.832 SO libspdk.so.6.0 00:04:56.832 SYMLINK libspdk.so 00:04:56.832 TEST_HEADER include/spdk/accel.h 00:04:56.832 CXX app/trace/trace.o 00:04:56.832 TEST_HEADER include/spdk/accel_module.h 00:04:56.832 TEST_HEADER include/spdk/assert.h 00:04:56.832 TEST_HEADER include/spdk/barrier.h 00:04:56.832 TEST_HEADER include/spdk/base64.h 00:04:56.832 CC test/rpc_client/rpc_client_test.o 00:04:56.832 TEST_HEADER include/spdk/bdev.h 00:04:56.832 TEST_HEADER include/spdk/bdev_module.h 00:04:56.832 TEST_HEADER include/spdk/bdev_zone.h 00:04:56.832 TEST_HEADER include/spdk/bit_array.h 00:04:56.832 TEST_HEADER include/spdk/bit_pool.h 00:04:56.832 TEST_HEADER include/spdk/blob_bdev.h 00:04:56.832 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:56.832 TEST_HEADER include/spdk/blobfs.h 00:04:56.832 TEST_HEADER include/spdk/blob.h 00:04:56.832 TEST_HEADER include/spdk/conf.h 00:04:56.832 TEST_HEADER include/spdk/config.h 00:04:56.832 TEST_HEADER include/spdk/cpuset.h 00:04:56.832 TEST_HEADER include/spdk/crc16.h 00:04:56.832 TEST_HEADER include/spdk/crc32.h 00:04:56.832 TEST_HEADER include/spdk/crc64.h 00:04:56.832 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:56.832 TEST_HEADER include/spdk/dif.h 00:04:56.832 TEST_HEADER include/spdk/dma.h 00:04:56.832 TEST_HEADER include/spdk/endian.h 00:04:56.832 TEST_HEADER include/spdk/env_dpdk.h 00:04:56.832 TEST_HEADER include/spdk/env.h 00:04:56.832 TEST_HEADER include/spdk/event.h 00:04:56.832 TEST_HEADER include/spdk/fd_group.h 00:04:56.832 TEST_HEADER include/spdk/fd.h 00:04:56.832 TEST_HEADER include/spdk/file.h 00:04:56.832 TEST_HEADER include/spdk/fsdev.h 00:04:56.832 TEST_HEADER include/spdk/fsdev_module.h 00:04:56.832 TEST_HEADER include/spdk/ftl.h 00:04:56.832 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:56.832 TEST_HEADER include/spdk/gpt_spec.h 00:04:56.832 TEST_HEADER include/spdk/hexlify.h 00:04:56.832 TEST_HEADER include/spdk/histogram_data.h 00:04:56.832 CC test/thread/poller_perf/poller_perf.o 00:04:56.832 TEST_HEADER include/spdk/idxd.h 00:04:56.832 TEST_HEADER include/spdk/idxd_spec.h 00:04:56.832 CC examples/ioat/perf/perf.o 00:04:56.832 TEST_HEADER include/spdk/init.h 00:04:56.832 TEST_HEADER include/spdk/ioat.h 00:04:56.832 TEST_HEADER include/spdk/ioat_spec.h 00:04:56.832 CC examples/util/zipf/zipf.o 00:04:56.832 TEST_HEADER include/spdk/iscsi_spec.h 00:04:57.091 TEST_HEADER include/spdk/json.h 00:04:57.091 TEST_HEADER include/spdk/jsonrpc.h 00:04:57.091 TEST_HEADER include/spdk/keyring.h 00:04:57.091 TEST_HEADER include/spdk/keyring_module.h 00:04:57.091 TEST_HEADER include/spdk/likely.h 00:04:57.091 TEST_HEADER include/spdk/log.h 00:04:57.091 TEST_HEADER include/spdk/lvol.h 00:04:57.091 TEST_HEADER include/spdk/md5.h 00:04:57.091 TEST_HEADER include/spdk/memory.h 00:04:57.091 TEST_HEADER include/spdk/mmio.h 00:04:57.091 TEST_HEADER include/spdk/nbd.h 00:04:57.091 TEST_HEADER include/spdk/net.h 00:04:57.091 TEST_HEADER include/spdk/notify.h 00:04:57.091 TEST_HEADER include/spdk/nvme.h 00:04:57.091 CC test/app/bdev_svc/bdev_svc.o 00:04:57.091 TEST_HEADER include/spdk/nvme_intel.h 00:04:57.091 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:57.091 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:57.091 TEST_HEADER include/spdk/nvme_spec.h 00:04:57.091 TEST_HEADER include/spdk/nvme_zns.h 00:04:57.091 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:57.091 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:04:57.091 TEST_HEADER include/spdk/nvmf.h 00:04:57.091 TEST_HEADER include/spdk/nvmf_spec.h 00:04:57.091 TEST_HEADER include/spdk/nvmf_transport.h 00:04:57.091 TEST_HEADER include/spdk/opal.h 00:04:57.091 TEST_HEADER include/spdk/opal_spec.h 00:04:57.091 CC test/env/mem_callbacks/mem_callbacks.o 00:04:57.091 TEST_HEADER include/spdk/pci_ids.h 00:04:57.091 CC test/dma/test_dma/test_dma.o 00:04:57.091 TEST_HEADER include/spdk/pipe.h 00:04:57.091 TEST_HEADER include/spdk/queue.h 00:04:57.091 TEST_HEADER include/spdk/reduce.h 00:04:57.091 TEST_HEADER include/spdk/rpc.h 00:04:57.091 TEST_HEADER include/spdk/scheduler.h 00:04:57.091 TEST_HEADER include/spdk/scsi.h 00:04:57.091 TEST_HEADER include/spdk/scsi_spec.h 00:04:57.091 TEST_HEADER include/spdk/sock.h 00:04:57.091 TEST_HEADER include/spdk/stdinc.h 00:04:57.091 TEST_HEADER include/spdk/string.h 00:04:57.091 TEST_HEADER include/spdk/thread.h 00:04:57.091 TEST_HEADER include/spdk/trace.h 00:04:57.091 TEST_HEADER include/spdk/trace_parser.h 00:04:57.091 TEST_HEADER include/spdk/tree.h 00:04:57.091 TEST_HEADER include/spdk/ublk.h 00:04:57.091 TEST_HEADER include/spdk/util.h 00:04:57.091 TEST_HEADER include/spdk/uuid.h 00:04:57.091 TEST_HEADER include/spdk/version.h 00:04:57.091 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:57.091 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:57.091 TEST_HEADER include/spdk/vhost.h 00:04:57.091 TEST_HEADER include/spdk/vmd.h 00:04:57.091 TEST_HEADER include/spdk/xor.h 00:04:57.091 TEST_HEADER include/spdk/zipf.h 00:04:57.091 CXX test/cpp_headers/accel.o 00:04:57.091 LINK rpc_client_test 00:04:57.091 LINK poller_perf 00:04:57.091 LINK zipf 00:04:57.091 LINK bdev_svc 00:04:57.091 LINK interrupt_tgt 00:04:57.091 LINK ioat_perf 00:04:57.091 LINK mem_callbacks 00:04:57.091 CXX test/cpp_headers/accel_module.o 00:04:57.349 LINK spdk_trace 00:04:57.349 CC test/app/histogram_perf/histogram_perf.o 00:04:57.349 CXX test/cpp_headers/assert.o 00:04:57.349 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:57.349 CC test/env/vtophys/vtophys.o 00:04:57.349 CC examples/ioat/verify/verify.o 00:04:57.349 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:57.349 CC test/env/memory/memory_ut.o 00:04:57.349 CC test/event/event_perf/event_perf.o 00:04:57.349 CC app/trace_record/trace_record.o 00:04:57.349 LINK histogram_perf 00:04:57.349 CXX test/cpp_headers/barrier.o 00:04:57.349 LINK env_dpdk_post_init 00:04:57.607 LINK test_dma 00:04:57.607 LINK vtophys 00:04:57.607 LINK event_perf 00:04:57.607 LINK verify 00:04:57.607 CXX test/cpp_headers/base64.o 00:04:57.607 LINK spdk_trace_record 00:04:57.607 CC app/nvmf_tgt/nvmf_main.o 00:04:57.607 CC test/app/jsoncat/jsoncat.o 00:04:57.607 CC test/app/stub/stub.o 00:04:57.607 CC test/event/reactor/reactor.o 00:04:57.607 CC test/env/pci/pci_ut.o 00:04:57.607 LINK nvme_fuzz 00:04:57.607 CXX test/cpp_headers/bdev.o 00:04:57.864 LINK jsoncat 00:04:57.864 LINK nvmf_tgt 00:04:57.864 LINK reactor 00:04:57.864 CC examples/thread/thread/thread_ex.o 00:04:57.864 LINK stub 00:04:57.864 CXX test/cpp_headers/bdev_module.o 00:04:57.864 CXX test/cpp_headers/bdev_zone.o 00:04:57.864 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:57.864 CC test/accel/dif/dif.o 00:04:57.864 CXX test/cpp_headers/bit_array.o 00:04:58.123 CC test/event/reactor_perf/reactor_perf.o 00:04:58.123 CC app/iscsi_tgt/iscsi_tgt.o 00:04:58.123 LINK thread 00:04:58.123 CXX test/cpp_headers/bit_pool.o 00:04:58.123 LINK reactor_perf 00:04:58.123 LINK pci_ut 00:04:58.123 LINK memory_ut 00:04:58.123 CC 
app/spdk_tgt/spdk_tgt.o 00:04:58.123 CC test/blobfs/mkfs/mkfs.o 00:04:58.382 CXX test/cpp_headers/blob_bdev.o 00:04:58.382 LINK iscsi_tgt 00:04:58.382 CXX test/cpp_headers/blobfs_bdev.o 00:04:58.382 CC test/event/app_repeat/app_repeat.o 00:04:58.382 CC examples/sock/hello_world/hello_sock.o 00:04:58.382 CC test/event/scheduler/scheduler.o 00:04:58.382 LINK mkfs 00:04:58.382 LINK spdk_tgt 00:04:58.382 LINK app_repeat 00:04:58.382 CXX test/cpp_headers/blobfs.o 00:04:58.640 CXX test/cpp_headers/blob.o 00:04:58.640 LINK dif 00:04:58.640 CC test/nvme/aer/aer.o 00:04:58.640 CXX test/cpp_headers/conf.o 00:04:58.640 CC app/spdk_lspci/spdk_lspci.o 00:04:58.640 LINK scheduler 00:04:58.640 CC test/lvol/esnap/esnap.o 00:04:58.640 LINK hello_sock 00:04:58.640 CXX test/cpp_headers/config.o 00:04:58.640 LINK spdk_lspci 00:04:58.640 CXX test/cpp_headers/cpuset.o 00:04:58.640 CC test/nvme/reset/reset.o 00:04:58.640 CC test/nvme/sgl/sgl.o 00:04:58.640 CXX test/cpp_headers/crc16.o 00:04:58.898 CC test/nvme/e2edp/nvme_dp.o 00:04:58.898 LINK aer 00:04:58.898 CC examples/vmd/lsvmd/lsvmd.o 00:04:58.898 CXX test/cpp_headers/crc32.o 00:04:58.898 CC app/spdk_nvme_perf/perf.o 00:04:58.898 LINK reset 00:04:58.898 LINK sgl 00:04:58.898 LINK lsvmd 00:04:59.155 LINK nvme_dp 00:04:59.155 CXX test/cpp_headers/crc64.o 00:04:59.155 CC test/nvme/overhead/overhead.o 00:04:59.155 CC test/bdev/bdevio/bdevio.o 00:04:59.155 CXX test/cpp_headers/dif.o 00:04:59.155 CXX test/cpp_headers/dma.o 00:04:59.155 CXX test/cpp_headers/endian.o 00:04:59.155 CC examples/vmd/led/led.o 00:04:59.155 CC test/nvme/err_injection/err_injection.o 00:04:59.155 CXX test/cpp_headers/env_dpdk.o 00:04:59.413 CXX test/cpp_headers/env.o 00:04:59.413 LINK overhead 00:04:59.413 LINK led 00:04:59.413 LINK err_injection 00:04:59.413 CC app/spdk_nvme_identify/identify.o 00:04:59.413 CXX test/cpp_headers/event.o 00:04:59.413 LINK bdevio 00:04:59.413 CXX test/cpp_headers/fd_group.o 00:04:59.413 CC app/spdk_nvme_discover/discovery_aer.o 00:04:59.413 LINK iscsi_fuzz 00:04:59.669 CXX test/cpp_headers/fd.o 00:04:59.669 CC test/nvme/startup/startup.o 00:04:59.669 CC examples/idxd/perf/perf.o 00:04:59.669 LINK spdk_nvme_discover 00:04:59.669 CC app/spdk_top/spdk_top.o 00:04:59.669 CXX test/cpp_headers/file.o 00:04:59.669 CC app/vhost/vhost.o 00:04:59.669 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:59.669 CXX test/cpp_headers/fsdev.o 00:04:59.669 LINK startup 00:04:59.669 LINK spdk_nvme_perf 00:04:59.928 CXX test/cpp_headers/fsdev_module.o 00:04:59.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:59.928 LINK vhost 00:04:59.928 CXX test/cpp_headers/ftl.o 00:04:59.928 LINK idxd_perf 00:04:59.928 CC test/nvme/reserve/reserve.o 00:04:59.928 CC app/spdk_dd/spdk_dd.o 00:05:00.187 CXX test/cpp_headers/fuse_dispatcher.o 00:05:00.187 CC app/fio/nvme/fio_plugin.o 00:05:00.187 LINK reserve 00:05:00.187 LINK spdk_nvme_identify 00:05:00.187 CXX test/cpp_headers/gpt_spec.o 00:05:00.187 CC examples/accel/perf/accel_perf.o 00:05:00.187 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:00.187 LINK vhost_fuzz 00:05:00.445 CXX test/cpp_headers/hexlify.o 00:05:00.445 LINK spdk_dd 00:05:00.445 CC test/nvme/simple_copy/simple_copy.o 00:05:00.445 LINK spdk_top 00:05:00.445 CXX test/cpp_headers/histogram_data.o 00:05:00.445 LINK hello_fsdev 00:05:00.445 CC examples/blob/hello_world/hello_blob.o 00:05:00.702 CC examples/nvme/hello_world/hello_world.o 00:05:00.702 LINK simple_copy 00:05:00.702 CC examples/blob/cli/blobcli.o 00:05:00.702 CXX test/cpp_headers/idxd.o 00:05:00.702 LINK 
accel_perf 00:05:00.702 CC app/fio/bdev/fio_plugin.o 00:05:00.702 CC test/nvme/connect_stress/connect_stress.o 00:05:00.702 LINK spdk_nvme 00:05:00.702 CXX test/cpp_headers/idxd_spec.o 00:05:00.702 LINK hello_blob 00:05:00.702 LINK hello_world 00:05:00.959 CC test/nvme/boot_partition/boot_partition.o 00:05:00.959 LINK connect_stress 00:05:00.959 CXX test/cpp_headers/init.o 00:05:00.959 CC test/nvme/compliance/nvme_compliance.o 00:05:00.959 CC test/nvme/fused_ordering/fused_ordering.o 00:05:00.959 CC examples/nvme/reconnect/reconnect.o 00:05:00.959 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:00.959 LINK boot_partition 00:05:00.959 CXX test/cpp_headers/ioat.o 00:05:01.217 LINK spdk_bdev 00:05:01.217 CC examples/nvme/arbitration/arbitration.o 00:05:01.217 LINK blobcli 00:05:01.217 CXX test/cpp_headers/ioat_spec.o 00:05:01.217 LINK fused_ordering 00:05:01.217 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:01.217 LINK nvme_compliance 00:05:01.217 CC examples/nvme/hotplug/hotplug.o 00:05:01.217 CXX test/cpp_headers/iscsi_spec.o 00:05:01.217 LINK reconnect 00:05:01.475 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:01.475 LINK doorbell_aers 00:05:01.475 LINK arbitration 00:05:01.475 CXX test/cpp_headers/json.o 00:05:01.475 CC examples/nvme/abort/abort.o 00:05:01.475 CXX test/cpp_headers/jsonrpc.o 00:05:01.475 CC examples/bdev/hello_world/hello_bdev.o 00:05:01.475 LINK hotplug 00:05:01.475 LINK nvme_manage 00:05:01.475 LINK cmb_copy 00:05:01.475 CC test/nvme/fdp/fdp.o 00:05:01.733 CXX test/cpp_headers/keyring.o 00:05:01.733 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:01.733 CXX test/cpp_headers/keyring_module.o 00:05:01.733 LINK hello_bdev 00:05:01.733 CC test/nvme/cuse/cuse.o 00:05:01.733 CXX test/cpp_headers/likely.o 00:05:01.733 CC examples/bdev/bdevperf/bdevperf.o 00:05:01.733 CXX test/cpp_headers/log.o 00:05:01.733 CXX test/cpp_headers/lvol.o 00:05:01.733 CXX test/cpp_headers/md5.o 00:05:01.733 LINK pmr_persistence 00:05:01.733 LINK abort 00:05:01.733 CXX test/cpp_headers/memory.o 00:05:01.733 CXX test/cpp_headers/mmio.o 00:05:01.991 CXX test/cpp_headers/nbd.o 00:05:01.991 CXX test/cpp_headers/net.o 00:05:01.991 CXX test/cpp_headers/notify.o 00:05:01.991 LINK fdp 00:05:01.991 CXX test/cpp_headers/nvme.o 00:05:01.991 CXX test/cpp_headers/nvme_intel.o 00:05:01.991 CXX test/cpp_headers/nvme_ocssd.o 00:05:01.991 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:01.991 CXX test/cpp_headers/nvme_spec.o 00:05:01.991 CXX test/cpp_headers/nvme_zns.o 00:05:01.991 CXX test/cpp_headers/nvmf_cmd.o 00:05:01.991 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:01.991 CXX test/cpp_headers/nvmf.o 00:05:01.991 CXX test/cpp_headers/nvmf_spec.o 00:05:02.248 CXX test/cpp_headers/nvmf_transport.o 00:05:02.248 CXX test/cpp_headers/opal.o 00:05:02.248 CXX test/cpp_headers/opal_spec.o 00:05:02.249 CXX test/cpp_headers/pci_ids.o 00:05:02.249 CXX test/cpp_headers/pipe.o 00:05:02.249 CXX test/cpp_headers/queue.o 00:05:02.249 CXX test/cpp_headers/reduce.o 00:05:02.249 CXX test/cpp_headers/rpc.o 00:05:02.249 CXX test/cpp_headers/scheduler.o 00:05:02.249 CXX test/cpp_headers/scsi.o 00:05:02.249 CXX test/cpp_headers/scsi_spec.o 00:05:02.249 CXX test/cpp_headers/sock.o 00:05:02.249 CXX test/cpp_headers/stdinc.o 00:05:02.249 CXX test/cpp_headers/string.o 00:05:02.506 CXX test/cpp_headers/thread.o 00:05:02.506 CXX test/cpp_headers/trace.o 00:05:02.506 CXX test/cpp_headers/trace_parser.o 00:05:02.506 CXX test/cpp_headers/tree.o 00:05:02.506 CXX test/cpp_headers/ublk.o 00:05:02.506 CXX test/cpp_headers/util.o 00:05:02.506 
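The CXX run above and below compiles one small translation unit per public SPDK header (test/cpp_headers/*.o), so any header that fails to pull in its own dependencies breaks the build here rather than in some unrelated consumer. A minimal sketch of that self-containment check, with illustrative paths rather than the exact autotest harness:

    # Generate and compile an empty-ish .cpp per public header; a header
    # that is not self-contained fails at this step.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
        g++ -std=c++17 -Iinclude -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o" || exit 1
    done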
CXX test/cpp_headers/uuid.o 00:05:02.506 LINK bdevperf 00:05:02.506 CXX test/cpp_headers/version.o 00:05:02.506 CXX test/cpp_headers/vfio_user_pci.o 00:05:02.506 CXX test/cpp_headers/vfio_user_spec.o 00:05:02.506 CXX test/cpp_headers/vhost.o 00:05:02.506 CXX test/cpp_headers/vmd.o 00:05:02.506 CXX test/cpp_headers/xor.o 00:05:02.506 CXX test/cpp_headers/zipf.o 00:05:02.765 LINK cuse 00:05:02.765 CC examples/nvmf/nvmf/nvmf.o 00:05:03.024 LINK nvmf 00:05:03.695 LINK esnap 00:05:03.954 00:05:03.954 real 1m4.093s 00:05:03.954 user 5m10.215s 00:05:03.954 sys 0m50.003s 00:05:03.954 10:10:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:03.954 ************************************ 00:05:03.954 10:10:43 make -- common/autotest_common.sh@10 -- $ set +x 00:05:03.954 END TEST make 00:05:03.954 ************************************ 00:05:03.954 10:10:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:03.954 10:10:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:03.954 10:10:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:03.954 10:10:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.954 10:10:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:03.954 10:10:43 -- pm/common@44 -- $ pid=5816 00:05:03.954 10:10:43 -- pm/common@50 -- $ kill -TERM 5816 00:05:03.954 10:10:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.954 10:10:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:03.954 10:10:43 -- pm/common@44 -- $ pid=5818 00:05:03.954 10:10:43 -- pm/common@50 -- $ kill -TERM 5818 00:05:03.954 10:10:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:03.954 10:10:43 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:03.954 10:10:43 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.954 10:10:43 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.954 10:10:43 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.954 10:10:43 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.954 10:10:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.954 10:10:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.954 10:10:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.954 10:10:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.954 10:10:43 -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.954 10:10:43 -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.954 10:10:43 -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.954 10:10:43 -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.954 10:10:43 -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.954 10:10:43 -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.954 10:10:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.954 10:10:43 -- scripts/common.sh@344 -- # case "$op" in 00:05:03.954 10:10:43 -- scripts/common.sh@345 -- # : 1 00:05:03.954 10:10:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.954 10:10:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.954 10:10:43 -- scripts/common.sh@365 -- # decimal 1 00:05:03.954 10:10:43 -- scripts/common.sh@353 -- # local d=1 00:05:03.954 10:10:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.954 10:10:43 -- scripts/common.sh@355 -- # echo 1 00:05:03.954 10:10:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.954 10:10:43 -- scripts/common.sh@366 -- # decimal 2 00:05:03.954 10:10:43 -- scripts/common.sh@353 -- # local d=2 00:05:03.954 10:10:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.954 10:10:43 -- scripts/common.sh@355 -- # echo 2 00:05:03.954 10:10:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.954 10:10:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.954 10:10:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.954 10:10:43 -- scripts/common.sh@368 -- # return 0 00:05:03.954 10:10:43 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.954 10:10:43 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.954 --rc genhtml_branch_coverage=1 00:05:03.954 --rc genhtml_function_coverage=1 00:05:03.954 --rc genhtml_legend=1 00:05:03.954 --rc geninfo_all_blocks=1 00:05:03.955 --rc geninfo_unexecuted_blocks=1 00:05:03.955 00:05:03.955 ' 00:05:03.955 10:10:43 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.955 --rc genhtml_branch_coverage=1 00:05:03.955 --rc genhtml_function_coverage=1 00:05:03.955 --rc genhtml_legend=1 00:05:03.955 --rc geninfo_all_blocks=1 00:05:03.955 --rc geninfo_unexecuted_blocks=1 00:05:03.955 00:05:03.955 ' 00:05:03.955 10:10:43 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.955 --rc genhtml_branch_coverage=1 00:05:03.955 --rc genhtml_function_coverage=1 00:05:03.955 --rc genhtml_legend=1 00:05:03.955 --rc geninfo_all_blocks=1 00:05:03.955 --rc geninfo_unexecuted_blocks=1 00:05:03.955 00:05:03.955 ' 00:05:03.955 10:10:43 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.955 --rc genhtml_branch_coverage=1 00:05:03.955 --rc genhtml_function_coverage=1 00:05:03.955 --rc genhtml_legend=1 00:05:03.955 --rc geninfo_all_blocks=1 00:05:03.955 --rc geninfo_unexecuted_blocks=1 00:05:03.955 00:05:03.955 ' 00:05:03.955 10:10:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.955 10:10:43 -- nvmf/common.sh@7 -- # uname -s 00:05:03.955 10:10:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.955 10:10:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.955 10:10:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.955 10:10:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.955 10:10:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.955 10:10:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.955 10:10:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.955 10:10:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.955 10:10:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.955 10:10:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.214 10:10:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3994a1af-dd19-4228-ab77-2da8b76d5ca6 00:05:04.214 
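The xtrace above walks scripts/common.sh comparing the installed lcov version (1.15) against 2, field by field, before choosing the --rc coverage flags. A condensed sketch of that comparison, assuming purely numeric fields (the traced helper also handles non-numeric parts via its decimal function):

    # Split versions on '.', '-' or ':' (the traced IFS=.-:) and compare
    # field by field; returns 0 when $1 sorts strictly before $2.
    version_lt() {
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # true in this run: 1 < 2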
10:10:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=3994a1af-dd19-4228-ab77-2da8b76d5ca6 00:05:04.214 10:10:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.214 10:10:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.214 10:10:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.214 10:10:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.214 10:10:43 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.214 10:10:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.214 10:10:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.214 10:10:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.214 10:10:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.214 10:10:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.214 10:10:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.214 10:10:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.214 10:10:43 -- paths/export.sh@5 -- # export PATH 00:05:04.214 10:10:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.214 10:10:43 -- nvmf/common.sh@51 -- # : 0 00:05:04.214 10:10:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.214 10:10:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.214 10:10:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.214 10:10:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.214 10:10:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.214 10:10:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.214 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.214 10:10:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.214 10:10:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.214 10:10:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.214 10:10:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:04.214 10:10:43 -- spdk/autotest.sh@32 -- # uname -s 00:05:04.214 10:10:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:04.214 10:10:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:04.214 10:10:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:04.214 10:10:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:04.214 10:10:43 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:04.214 10:10:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:04.214 10:10:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:04.214 10:10:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:04.214 10:10:43 -- spdk/autotest.sh@48 -- # udevadm_pid=66240 00:05:04.214 10:10:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:04.214 10:10:43 -- pm/common@17 -- # local monitor 00:05:04.214 10:10:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.214 10:10:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.214 10:10:43 -- pm/common@25 -- # sleep 1 00:05:04.214 10:10:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:04.214 10:10:43 -- pm/common@21 -- # date +%s 00:05:04.214 10:10:43 -- pm/common@21 -- # date +%s 00:05:04.214 10:10:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732875043 00:05:04.214 10:10:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732875043 00:05:04.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732875043_collect-cpu-load.pm.log 00:05:04.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732875043_collect-vmstat.pm.log 00:05:05.155 10:10:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:05.155 10:10:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:05.155 10:10:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.155 10:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.155 10:10:44 -- spdk/autotest.sh@59 -- # create_test_list 00:05:05.155 10:10:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:05.155 10:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.155 10:10:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:05.155 10:10:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:05.155 10:10:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:05.155 10:10:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:05.155 10:10:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:05.155 10:10:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:05.155 10:10:44 -- common/autotest_common.sh@1457 -- # uname 00:05:05.155 10:10:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:05.155 10:10:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:05.155 10:10:44 -- common/autotest_common.sh@1477 -- # uname 00:05:05.155 10:10:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:05.155 10:10:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:05.155 10:10:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:05.155 lcov: LCOV version 1.15 00:05:05.155 10:10:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:20.061 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:20.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:34.982 10:11:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:34.982 10:11:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.982 10:11:14 -- common/autotest_common.sh@10 -- # set +x 00:05:34.982 10:11:14 -- spdk/autotest.sh@78 -- # rm -f 00:05:34.982 10:11:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.126 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:36.126 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:36.126 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:36.126 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:36.126 10:11:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:36.126 10:11:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:36.126 10:11:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:36.126 10:11:15 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:36.126 10:11:15 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:36.126 10:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:36.126 10:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:36.126 10:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:36.126 10:11:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:36.126 10:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.126 10:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.126 10:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:36.126 10:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:36.126 10:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:36.126 No valid GPT data, bailing 00:05:36.126 10:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:36.126 10:11:15 -- scripts/common.sh@394 -- # pt= 00:05:36.126 10:11:15 -- scripts/common.sh@395 -- # return 1 00:05:36.126 10:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:36.126 1+0 records in 00:05:36.126 1+0 records out 00:05:36.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299185 s, 35.0 MB/s 00:05:36.126 10:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.126 10:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.126 10:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:36.126 10:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:36.126 10:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:36.126 No valid GPT data, bailing 00:05:36.387 10:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:36.387 10:11:15 -- scripts/common.sh@394 -- # pt= 00:05:36.387 10:11:15 -- scripts/common.sh@395 -- # return 1 00:05:36.387 10:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:36.387 1+0 records in 00:05:36.387 1+0 records out 00:05:36.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481132 s, 218 MB/s 00:05:36.387 10:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.387 10:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.387 10:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:36.387 10:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:36.387 10:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:36.387 No valid GPT data, bailing 00:05:36.387 10:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:36.387 10:11:15 -- scripts/common.sh@394 -- # pt= 00:05:36.387 10:11:15 -- scripts/common.sh@395 -- # return 1 00:05:36.387 10:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:36.387 1+0 
records in 00:05:36.387 1+0 records out 00:05:36.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577262 s, 182 MB/s 00:05:36.387 10:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.387 10:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.387 10:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:36.387 10:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:36.387 10:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:36.387 No valid GPT data, bailing 00:05:36.387 10:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:36.387 10:11:15 -- scripts/common.sh@394 -- # pt= 00:05:36.387 10:11:15 -- scripts/common.sh@395 -- # return 1 00:05:36.387 10:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:36.387 1+0 records in 00:05:36.387 1+0 records out 00:05:36.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517001 s, 203 MB/s 00:05:36.387 10:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.387 10:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.387 10:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:36.387 10:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:36.387 10:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:36.648 No valid GPT data, bailing 00:05:36.648 10:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:36.648 10:11:15 -- scripts/common.sh@394 -- # pt= 00:05:36.648 10:11:15 -- scripts/common.sh@395 -- # return 1 00:05:36.648 10:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:36.648 1+0 records in 00:05:36.648 1+0 records out 00:05:36.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566972 s, 185 MB/s 00:05:36.648 10:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.648 10:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.648 10:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:36.648 10:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:36.648 10:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:36.648 No valid GPT data, bailing 00:05:36.648 10:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:36.648 10:11:15 -- scripts/common.sh@394 -- # pt= 00:05:36.648 10:11:15 -- scripts/common.sh@395 -- # return 1 00:05:36.648 10:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:36.648 1+0 records in 00:05:36.648 1+0 records out 00:05:36.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625406 s, 168 MB/s 00:05:36.648 10:11:15 -- spdk/autotest.sh@105 -- # sync 00:05:36.648 10:11:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:36.648 10:11:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:36.648 10:11:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:38.566 10:11:17 -- spdk/autotest.sh@111 -- # uname -s 00:05:38.566 10:11:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:38.566 10:11:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:38.566 10:11:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.398 
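Each namespace above gets the same treatment: scripts/spdk-gpt.py and blkid -s PTTYPE find no partition table ("No valid GPT data, bailing"), so autotest zeroes the first MiB with dd before the sync. The loop reduced to its essentials — here the in-use probe is simplified to the blkid check alone, without the spdk-gpt.py pass:

    # Zero the first MiB of every whole NVMe namespace that carries no
    # recognizable partition table; nvme*n!(*p*) needs extglob, as in the trace.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync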
Hugepages
00:05:39.398 node hugesize free / total
00:05:39.398 node0 1048576kB 0 / 0
00:05:39.398 node0 2048kB 0 / 0
00:05:39.398
00:05:39.398 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:39.398 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:39.398 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:39.398 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:39.658 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:39.659 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:39.659 10:11:18 -- spdk/autotest.sh@117 -- # uname -s
00:05:39.659 10:11:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:39.659 10:11:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:39.659 10:11:18 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.804 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:40.804 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:40.804 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:40.804 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:40.804 10:11:20 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:42.191 10:11:21 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:42.191 10:11:21 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:42.191 10:11:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:42.191 10:11:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:42.191 10:11:21 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:42.191 10:11:21 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:42.191 10:11:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:42.191 10:11:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:42.191 10:11:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:42.191 10:11:21 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:42.191 10:11:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:05:42.191 10:11:21 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:42.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:42.453 Waiting for block devices as requested
00:05:42.453 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:42.715 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:42.715 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:05:42.715 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:05:47.981 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:05:47.981 10:11:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:47.981 10:11:27 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:47.981 10:11:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:47.981 10:11:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1543 -- # continue 00:05:47.981 10:11:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:47.981 10:11:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1543 -- # continue 00:05:47.981 10:11:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
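The readlink/grep pipeline above resolves each /sys/class/nvme/nvmeX symlink and keeps the one routed through the requested PCI address; note that on this rig 0000:00:10.0 maps to nvme1, not nvme0. One way to express that lookup on its own — a simplified stand-in for the traced get_nvme_ctrlr_from_bdf, not the exact helper:

    # Map a PCI BDF such as 0000:00:10.0 to its controller node, e.g. /dev/nvme1.
    ctrlr_for_bdf() {
        local bdf=$1 link
        for link in /sys/class/nvme/nvme*; do
            if [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]]; then
                echo "/dev/${link##*/}"
                return 0
            fi
        done
        return 1
    }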
00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:47.981 10:11:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1543 -- # continue 00:05:47.981 10:11:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:47.981 10:11:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:47.981 10:11:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:47.981 10:11:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
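Both controllers probed so far report oacs 0x12a; bit 3 (0x8) is the namespace-management capability, which is why oacs_ns_manage evaluates to 8 and the loop goes on to read unvmcap (unallocated NVM capacity) before deciding nothing needs reverting. The same probe in isolation:

    # Read OACS and the unallocated capacity from identify-controller output.
    ctrlr=/dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)            # ' 0x12a' in this run
    if (( (oacs & 0x8) != 0 )); then                                   # namespace management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity, nothing to revert"
    fi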
00:05:47.981 10:11:27 -- common/autotest_common.sh@1543 -- # continue 00:05:47.981 10:11:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:47.981 10:11:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.981 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.981 10:11:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:47.981 10:11:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.981 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.982 10:11:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.806 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.806 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.806 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.806 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:49.065 10:11:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:49.065 10:11:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.065 10:11:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.065 10:11:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:49.065 10:11:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:49.065 10:11:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:49.065 10:11:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:49.065 10:11:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:49.065 10:11:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:49.065 10:11:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:49.065 10:11:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:49.065 10:11:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:49.065 10:11:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:49.065 10:11:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:49.065 10:11:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.065 10:11:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:49.065 10:11:28 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:49.065 10:11:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:49.065 10:11:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:49.065 10:11:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.065 10:11:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:49.065 10:11:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.065 10:11:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:49.065 10:11:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
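The scan above (it continues just below for 0000:00:13.0) compares each controller's PCI device id against 0x0a54 before attempting an OPAL revert; the QEMU controllers here all read 0x0010, so the list stays empty and opal_revert_cleanup returns early. A standalone version of that filter, with $rootdir as in the trace:

    # Collect only the BDFs whose PCI device id marks them for OPAL revert.
    opal_bdfs=()
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]:-none found}"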
00:05:49.065 10:11:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:49.065 10:11:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:49.065 10:11:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.065 10:11:28 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:49.065 10:11:28 -- common/autotest_common.sh@1572 -- # return 0 00:05:49.065 10:11:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:49.065 10:11:28 -- common/autotest_common.sh@1580 -- # return 0 00:05:49.065 10:11:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:49.065 10:11:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:49.065 10:11:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:49.065 10:11:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:49.065 10:11:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:49.065 10:11:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.065 10:11:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.065 10:11:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:49.065 10:11:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:49.065 10:11:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.065 10:11:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.065 10:11:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.065 ************************************ 00:05:49.065 START TEST env 00:05:49.065 ************************************ 00:05:49.065 10:11:28 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:49.065 * Looking for test storage... 00:05:49.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:49.065 10:11:28 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.065 10:11:28 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.065 10:11:28 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.324 10:11:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.324 10:11:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.324 10:11:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.324 10:11:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.324 10:11:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.324 10:11:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.324 10:11:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.324 10:11:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.324 10:11:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.324 10:11:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.324 10:11:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.324 10:11:28 env -- scripts/common.sh@344 -- # case "$op" in 00:05:49.324 10:11:28 env -- scripts/common.sh@345 -- # : 1 00:05:49.324 10:11:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.324 10:11:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.324 10:11:28 env -- scripts/common.sh@365 -- # decimal 1 00:05:49.324 10:11:28 env -- scripts/common.sh@353 -- # local d=1 00:05:49.324 10:11:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.324 10:11:28 env -- scripts/common.sh@355 -- # echo 1 00:05:49.324 10:11:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.324 10:11:28 env -- scripts/common.sh@366 -- # decimal 2 00:05:49.324 10:11:28 env -- scripts/common.sh@353 -- # local d=2 00:05:49.324 10:11:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.324 10:11:28 env -- scripts/common.sh@355 -- # echo 2 00:05:49.324 10:11:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.324 10:11:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.324 10:11:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.324 10:11:28 env -- scripts/common.sh@368 -- # return 0 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.324 --rc genhtml_branch_coverage=1 00:05:49.324 --rc genhtml_function_coverage=1 00:05:49.324 --rc genhtml_legend=1 00:05:49.324 --rc geninfo_all_blocks=1 00:05:49.324 --rc geninfo_unexecuted_blocks=1 00:05:49.324 00:05:49.324 ' 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.324 --rc genhtml_branch_coverage=1 00:05:49.324 --rc genhtml_function_coverage=1 00:05:49.324 --rc genhtml_legend=1 00:05:49.324 --rc geninfo_all_blocks=1 00:05:49.324 --rc geninfo_unexecuted_blocks=1 00:05:49.324 00:05:49.324 ' 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.324 --rc genhtml_branch_coverage=1 00:05:49.324 --rc genhtml_function_coverage=1 00:05:49.324 --rc genhtml_legend=1 00:05:49.324 --rc geninfo_all_blocks=1 00:05:49.324 --rc geninfo_unexecuted_blocks=1 00:05:49.324 00:05:49.324 ' 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.324 --rc genhtml_branch_coverage=1 00:05:49.324 --rc genhtml_function_coverage=1 00:05:49.324 --rc genhtml_legend=1 00:05:49.324 --rc geninfo_all_blocks=1 00:05:49.324 --rc geninfo_unexecuted_blocks=1 00:05:49.324 00:05:49.324 ' 00:05:49.324 10:11:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.324 10:11:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.324 10:11:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.324 ************************************ 00:05:49.324 START TEST env_memory 00:05:49.324 ************************************ 00:05:49.324 10:11:28 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:49.324 00:05:49.324 00:05:49.324 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.324 http://cunit.sourceforge.net/ 00:05:49.324 00:05:49.324 00:05:49.324 Suite: memory 00:05:49.324 Test: alloc and free memory map ...[2024-11-29 10:11:28.619165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:49.324 passed 00:05:49.324 Test: mem map translation ...[2024-11-29 10:11:28.657707] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:49.324 [2024-11-29 10:11:28.657741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:49.324 [2024-11-29 10:11:28.657806] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:49.324 [2024-11-29 10:11:28.657819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:49.324 passed 00:05:49.324 Test: mem map registration ...[2024-11-29 10:11:28.725679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:49.324 [2024-11-29 10:11:28.725709] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:49.324 passed 00:05:49.582 Test: mem map adjacent registrations ...passed 00:05:49.583 00:05:49.583 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.583 suites 1 1 n/a 0 0 00:05:49.583 tests 4 4 4 0 0 00:05:49.583 asserts 152 152 152 0 n/a 00:05:49.583 00:05:49.583 Elapsed time = 0.232 seconds 00:05:49.583 00:05:49.583 real 0m0.267s 00:05:49.583 user 0m0.237s 00:05:49.583 sys 0m0.021s 00:05:49.583 10:11:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.583 10:11:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:49.583 ************************************ 00:05:49.583 END TEST env_memory 00:05:49.583 ************************************ 00:05:49.583 10:11:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:49.583 10:11:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.583 10:11:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.583 10:11:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.583 ************************************ 00:05:49.583 START TEST env_vtophys 00:05:49.583 ************************************ 00:05:49.583 10:11:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:49.583 EAL: lib.eal log level changed from notice to debug 00:05:49.583 EAL: Detected lcore 0 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 1 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 2 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 3 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 4 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 5 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 6 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 7 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 8 as core 0 on socket 0 00:05:49.583 EAL: Detected lcore 9 as core 0 on socket 0 00:05:49.583 EAL: Maximum logical cores by configuration: 128 00:05:49.583 EAL: Detected CPU lcores: 10 00:05:49.583 EAL: Detected NUMA nodes: 1 00:05:49.583 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:49.583 EAL: Detected shared linkage of DPDK 00:05:49.583 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:49.583 EAL: Registered [vdev] bus. 00:05:49.583 EAL: bus.vdev log level changed from disabled to notice 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:49.583 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:49.583 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:49.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:49.583 EAL: No shared files mode enabled, IPC will be disabled 00:05:49.583 EAL: No shared files mode enabled, IPC is disabled 00:05:49.583 EAL: Selected IOVA mode 'PA' 00:05:49.583 EAL: Probing VFIO support... 00:05:49.583 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:49.583 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:49.583 EAL: Ask a virtual area of 0x2e000 bytes 00:05:49.583 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:49.583 EAL: Setting up physically contiguous memory... 00:05:49.583 EAL: Setting maximum number of open files to 524288 00:05:49.583 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:49.583 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:49.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.583 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:49.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.583 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:49.583 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:49.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.583 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:49.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.583 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:49.583 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:49.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.583 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:49.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.583 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:49.583 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:49.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.583 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:49.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.583 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:49.583 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:49.583 EAL: Hugepages will be freed exactly as allocated. 00:05:49.583 EAL: No shared files mode enabled, IPC is disabled 00:05:49.583 EAL: No shared files mode enabled, IPC is disabled 00:05:49.583 EAL: TSC frequency is ~2600000 KHz 00:05:49.583 EAL: Main lcore 0 is ready (tid=7f3e5c82fa40;cpuset=[0]) 00:05:49.583 EAL: Trying to obtain current memory policy. 00:05:49.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.583 EAL: Restoring previous memory policy: 0 00:05:49.583 EAL: request: mp_malloc_sync 00:05:49.583 EAL: No shared files mode enabled, IPC is disabled 00:05:49.583 EAL: Heap on socket 0 was expanded by 2MB 00:05:49.583 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:49.583 EAL: No shared files mode enabled, IPC is disabled 00:05:49.583 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:49.583 EAL: Mem event callback 'spdk:(nil)' registered 00:05:49.583 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:49.583 00:05:49.583 00:05:49.583 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.583 http://cunit.sourceforge.net/ 00:05:49.583 00:05:49.583 00:05:49.583 Suite: components_suite 00:05:50.150 Test: vtophys_malloc_test ...passed 00:05:50.150 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.150 EAL: Restoring previous memory policy: 4 00:05:50.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.150 EAL: request: mp_malloc_sync 00:05:50.150 EAL: No shared files mode enabled, IPC is disabled 00:05:50.150 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.150 EAL: request: mp_malloc_sync 00:05:50.150 EAL: No shared files mode enabled, IPC is disabled 00:05:50.150 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.150 EAL: Trying to obtain current memory policy. 00:05:50.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.150 EAL: Restoring previous memory policy: 4 00:05:50.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.150 EAL: request: mp_malloc_sync 00:05:50.150 EAL: No shared files mode enabled, IPC is disabled 00:05:50.150 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.150 EAL: request: mp_malloc_sync 00:05:50.150 EAL: No shared files mode enabled, IPC is disabled 00:05:50.150 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.150 EAL: Trying to obtain current memory policy. 00:05:50.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.150 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.151 EAL: Trying to obtain current memory policy. 
00:05:50.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.151 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.151 EAL: Trying to obtain current memory policy. 00:05:50.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.151 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.151 EAL: Trying to obtain current memory policy. 00:05:50.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.151 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.151 EAL: Trying to obtain current memory policy. 00:05:50.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.151 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.151 EAL: Trying to obtain current memory policy. 00:05:50.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.151 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.151 EAL: Trying to obtain current memory policy. 
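Note: the repeated "expanded by / shrunk by" pairs in this vtophys run come from vtophys_malloc_test doubling its allocation size (4MB, 6MB, 10MB, 18MB, ... up to 1026MB); each malloc drives the registered 'spdk:(nil)' mem event callback to grow the EAL heap, and each free shrinks it again. A minimal sketch for re-running just this binary against the same checkout (paths as traced above; the setup.sh hugepage step and the HUGEMEM value are assumptions, adjust to your environment):

    # sketch: reproduce the vtophys run from this log (assumes the same
    # /home/vagrant/spdk_repo layout; HUGEMEM=2048 is an example value)
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # runs vtophys_malloc_test and vtophys_spdk_malloc_test, as above
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys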
00:05:50.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.151 EAL: Restoring previous memory policy: 4 00:05:50.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.151 EAL: request: mp_malloc_sync 00:05:50.151 EAL: No shared files mode enabled, IPC is disabled 00:05:50.151 EAL: Heap on socket 0 was expanded by 514MB 00:05:50.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.409 EAL: request: mp_malloc_sync 00:05:50.409 EAL: No shared files mode enabled, IPC is disabled 00:05:50.409 EAL: Heap on socket 0 was shrunk by 514MB 00:05:50.409 EAL: Trying to obtain current memory policy. 00:05:50.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.409 EAL: Restoring previous memory policy: 4 00:05:50.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.409 EAL: request: mp_malloc_sync 00:05:50.409 EAL: No shared files mode enabled, IPC is disabled 00:05:50.409 EAL: Heap on socket 0 was expanded by 1026MB 00:05:50.667 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.667 passed 00:05:50.667 00:05:50.667 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.667 suites 1 1 n/a 0 0 00:05:50.668 tests 2 2 2 0 0 00:05:50.668 asserts 5568 5568 5568 0 n/a 00:05:50.668 00:05:50.668 Elapsed time = 0.977 seconds 00:05:50.668 EAL: request: mp_malloc_sync 00:05:50.668 EAL: No shared files mode enabled, IPC is disabled 00:05:50.668 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:50.668 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.668 EAL: request: mp_malloc_sync 00:05:50.668 EAL: No shared files mode enabled, IPC is disabled 00:05:50.668 EAL: Heap on socket 0 was shrunk by 2MB 00:05:50.668 EAL: No shared files mode enabled, IPC is disabled 00:05:50.668 EAL: No shared files mode enabled, IPC is disabled 00:05:50.668 EAL: No shared files mode enabled, IPC is disabled 00:05:50.668 00:05:50.668 real 0m1.192s 00:05:50.668 user 0m0.493s 00:05:50.668 sys 0m0.565s 00:05:50.668 10:11:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.668 10:11:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:50.668 ************************************ 00:05:50.668 END TEST env_vtophys 00:05:50.668 ************************************ 00:05:50.668 10:11:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:50.668 10:11:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.668 10:11:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.668 10:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.668 ************************************ 00:05:50.668 START TEST env_pci 00:05:50.668 ************************************ 00:05:50.668 10:11:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:50.668 00:05:50.668 00:05:50.668 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.668 http://cunit.sourceforge.net/ 00:05:50.668 00:05:50.668 00:05:50.668 Suite: pci 00:05:50.668 Test: pci_hook ...[2024-11-29 10:11:30.119498] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68981 has claimed it 00:05:50.926 EAL: Cannot find device (10000:00:01.0) 00:05:50.926 EAL: Failed to attach device on primary process 00:05:50.926 passed 00:05:50.926 00:05:50.926 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.926 suites 1 1 n/a 0 0 00:05:50.926 tests 1 1 1 0 0 
00:05:50.926 asserts 25 25 25 0 n/a 00:05:50.926 00:05:50.926 Elapsed time = 0.004 seconds 00:05:50.926 00:05:50.926 real 0m0.048s 00:05:50.926 user 0m0.026s 00:05:50.926 sys 0m0.022s 00:05:50.926 10:11:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.926 10:11:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:50.926 ************************************ 00:05:50.926 END TEST env_pci 00:05:50.926 ************************************ 00:05:50.926 10:11:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:50.926 10:11:30 env -- env/env.sh@15 -- # uname 00:05:50.926 10:11:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:50.926 10:11:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:50.926 10:11:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:50.926 10:11:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:50.926 10:11:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.926 10:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.926 ************************************ 00:05:50.926 START TEST env_dpdk_post_init 00:05:50.926 ************************************ 00:05:50.926 10:11:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:50.926 EAL: Detected CPU lcores: 10 00:05:50.926 EAL: Detected NUMA nodes: 1 00:05:50.926 EAL: Detected shared linkage of DPDK 00:05:50.926 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:50.926 EAL: Selected IOVA mode 'PA' 00:05:50.926 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:50.926 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:50.926 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:50.926 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:50.926 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:51.185 Starting DPDK initialization... 00:05:51.185 Starting SPDK post initialization... 00:05:51.185 SPDK NVMe probe 00:05:51.185 Attaching to 0000:00:10.0 00:05:51.185 Attaching to 0000:00:11.0 00:05:51.185 Attaching to 0000:00:12.0 00:05:51.185 Attaching to 0000:00:13.0 00:05:51.185 Attached to 0000:00:10.0 00:05:51.185 Attached to 0000:00:11.0 00:05:51.185 Attached to 0000:00:13.0 00:05:51.185 Attached to 0000:00:12.0 00:05:51.185 Cleaning up... 
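The post-init run above probes the four emulated NVMe controllers (1b36:0010) with the spdk_nvme driver; note that the attach order reported (13.0 before 12.0) need not match probe order. The exact invocation is visible in the trace and can be repeated stand-alone; a sketch using the same core mask and base virtual address as the log:

    # sketch: stand-alone re-run of the DPDK post-initialization test,
    # with the -c 0x1 core mask and base-virtaddr taken from the trace
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000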
00:05:51.185 00:05:51.185 real 0m0.206s 00:05:51.185 user 0m0.056s 00:05:51.185 sys 0m0.052s 00:05:51.185 10:11:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.185 10:11:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.185 ************************************ 00:05:51.185 END TEST env_dpdk_post_init 00:05:51.185 ************************************ 00:05:51.185 10:11:30 env -- env/env.sh@26 -- # uname 00:05:51.185 10:11:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:51.185 10:11:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:51.185 10:11:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.185 10:11:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.185 10:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.185 ************************************ 00:05:51.185 START TEST env_mem_callbacks 00:05:51.185 ************************************ 00:05:51.185 10:11:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:51.185 EAL: Detected CPU lcores: 10 00:05:51.185 EAL: Detected NUMA nodes: 1 00:05:51.185 EAL: Detected shared linkage of DPDK 00:05:51.185 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:51.185 EAL: Selected IOVA mode 'PA' 00:05:51.185 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:51.185 00:05:51.185 00:05:51.185 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.185 http://cunit.sourceforge.net/ 00:05:51.185 00:05:51.185 00:05:51.185 Suite: memory 00:05:51.185 Test: test ... 00:05:51.185 register 0x200000200000 2097152 00:05:51.185 malloc 3145728 00:05:51.185 register 0x200000400000 4194304 00:05:51.185 buf 0x200000500000 len 3145728 PASSED 00:05:51.185 malloc 64 00:05:51.185 buf 0x2000004fff40 len 64 PASSED 00:05:51.185 malloc 4194304 00:05:51.185 register 0x200000800000 6291456 00:05:51.185 buf 0x200000a00000 len 4194304 PASSED 00:05:51.185 free 0x200000500000 3145728 00:05:51.185 free 0x2000004fff40 64 00:05:51.185 unregister 0x200000400000 4194304 PASSED 00:05:51.185 free 0x200000a00000 4194304 00:05:51.185 unregister 0x200000800000 6291456 PASSED 00:05:51.185 malloc 8388608 00:05:51.185 register 0x200000400000 10485760 00:05:51.185 buf 0x200000600000 len 8388608 PASSED 00:05:51.185 free 0x200000600000 8388608 00:05:51.185 unregister 0x200000400000 10485760 PASSED 00:05:51.185 passed 00:05:51.185 00:05:51.185 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.185 suites 1 1 n/a 0 0 00:05:51.185 tests 1 1 1 0 0 00:05:51.185 asserts 15 15 15 0 n/a 00:05:51.185 00:05:51.185 Elapsed time = 0.010 seconds 00:05:51.185 00:05:51.185 real 0m0.149s 00:05:51.185 user 0m0.021s 00:05:51.185 sys 0m0.027s 00:05:51.185 10:11:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.185 10:11:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:51.185 ************************************ 00:05:51.185 END TEST env_mem_callbacks 00:05:51.185 ************************************ 00:05:51.185 ************************************ 00:05:51.185 END TEST env 00:05:51.185 ************************************ 00:05:51.185 00:05:51.185 real 0m2.201s 00:05:51.185 user 0m0.980s 00:05:51.185 sys 0m0.881s 00:05:51.185 10:11:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.185 10:11:30 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.444 10:11:30 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:51.444 10:11:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.444 10:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.444 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.444 ************************************ 00:05:51.444 START TEST rpc 00:05:51.444 ************************************ 00:05:51.444 10:11:30 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:51.444 * Looking for test storage... 00:05:51.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.444 10:11:30 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.444 10:11:30 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.444 10:11:30 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.444 10:11:30 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.444 10:11:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.444 10:11:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.444 10:11:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.444 10:11:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.444 10:11:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.444 10:11:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.444 10:11:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.444 10:11:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.444 10:11:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.444 10:11:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.444 10:11:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.444 10:11:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.444 10:11:30 rpc -- scripts/common.sh@345 -- # : 1 00:05:51.444 10:11:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.444 10:11:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.444 10:11:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.444 10:11:30 rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.444 10:11:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.445 10:11:30 rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.445 10:11:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.445 10:11:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.445 10:11:30 rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.445 10:11:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.445 10:11:30 rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.445 10:11:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.445 10:11:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.445 10:11:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.445 10:11:30 rpc -- scripts/common.sh@368 -- # return 0 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.445 --rc genhtml_branch_coverage=1 00:05:51.445 --rc genhtml_function_coverage=1 00:05:51.445 --rc genhtml_legend=1 00:05:51.445 --rc geninfo_all_blocks=1 00:05:51.445 --rc geninfo_unexecuted_blocks=1 00:05:51.445 00:05:51.445 ' 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.445 --rc genhtml_branch_coverage=1 00:05:51.445 --rc genhtml_function_coverage=1 00:05:51.445 --rc genhtml_legend=1 00:05:51.445 --rc geninfo_all_blocks=1 00:05:51.445 --rc geninfo_unexecuted_blocks=1 00:05:51.445 00:05:51.445 ' 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.445 --rc genhtml_branch_coverage=1 00:05:51.445 --rc genhtml_function_coverage=1 00:05:51.445 --rc genhtml_legend=1 00:05:51.445 --rc geninfo_all_blocks=1 00:05:51.445 --rc geninfo_unexecuted_blocks=1 00:05:51.445 00:05:51.445 ' 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.445 --rc genhtml_branch_coverage=1 00:05:51.445 --rc genhtml_function_coverage=1 00:05:51.445 --rc genhtml_legend=1 00:05:51.445 --rc geninfo_all_blocks=1 00:05:51.445 --rc geninfo_unexecuted_blocks=1 00:05:51.445 00:05:51.445 ' 00:05:51.445 10:11:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69103 00:05:51.445 10:11:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.445 10:11:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69103 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@835 -- # '[' -z 69103 ']' 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.445 10:11:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.445 10:11:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:51.445 [2024-11-29 10:11:30.865217] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:51.445 [2024-11-29 10:11:30.865326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69103 ] 00:05:51.704 [2024-11-29 10:11:31.010211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.704 [2024-11-29 10:11:31.028898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:51.704 [2024-11-29 10:11:31.028945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69103' to capture a snapshot of events at runtime. 00:05:51.704 [2024-11-29 10:11:31.028956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:51.704 [2024-11-29 10:11:31.028964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:51.704 [2024-11-29 10:11:31.028974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69103 for offline analysis/debug. 00:05:51.704 [2024-11-29 10:11:31.029265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.271 10:11:31 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.271 10:11:31 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.271 10:11:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.271 10:11:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.271 10:11:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:52.271 10:11:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:52.271 10:11:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.271 10:11:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.271 10:11:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 ************************************ 00:05:52.271 START TEST rpc_integrity 00:05:52.271 ************************************ 00:05:52.271 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:52.271 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:52.271 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.271 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.271 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:52.271 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:52.530 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:52.530 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
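The rpc_integrity test drives the freshly started spdk_tgt (pid 69103 above) over the JSON-RPC socket /var/tmp/spdk.sock: create a malloc bdev, wrap it in a passthru bdev that claims it, then tear both down. A hand-run equivalent, sketched with scripts/rpc.py (an assumption: the standard rpc client from the same checkout, used here in place of the test's rpc_cmd wrapper):

    # sketch: the bdev create/claim/delete sequence this test exercises,
    # issued against the default /var/tmp/spdk.sock as in the log
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # once the target is listening:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512            # -> Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs                      # both bdevs listed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_delete Passthru0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0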
00:05:52.530 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.530 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.530 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.530 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:52.530 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:52.530 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.530 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.530 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.530 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:52.530 { 00:05:52.530 "name": "Malloc0", 00:05:52.530 "aliases": [ 00:05:52.530 "34fc38f8-368f-4341-af2a-bafd1a909a02" 00:05:52.530 ], 00:05:52.530 "product_name": "Malloc disk", 00:05:52.530 "block_size": 512, 00:05:52.530 "num_blocks": 16384, 00:05:52.530 "uuid": "34fc38f8-368f-4341-af2a-bafd1a909a02", 00:05:52.530 "assigned_rate_limits": { 00:05:52.530 "rw_ios_per_sec": 0, 00:05:52.530 "rw_mbytes_per_sec": 0, 00:05:52.531 "r_mbytes_per_sec": 0, 00:05:52.531 "w_mbytes_per_sec": 0 00:05:52.531 }, 00:05:52.531 "claimed": false, 00:05:52.531 "zoned": false, 00:05:52.531 "supported_io_types": { 00:05:52.531 "read": true, 00:05:52.531 "write": true, 00:05:52.531 "unmap": true, 00:05:52.531 "flush": true, 00:05:52.531 "reset": true, 00:05:52.531 "nvme_admin": false, 00:05:52.531 "nvme_io": false, 00:05:52.531 "nvme_io_md": false, 00:05:52.531 "write_zeroes": true, 00:05:52.531 "zcopy": true, 00:05:52.531 "get_zone_info": false, 00:05:52.531 "zone_management": false, 00:05:52.531 "zone_append": false, 00:05:52.531 "compare": false, 00:05:52.531 "compare_and_write": false, 00:05:52.531 "abort": true, 00:05:52.531 "seek_hole": false, 00:05:52.531 "seek_data": false, 00:05:52.531 "copy": true, 00:05:52.531 "nvme_iov_md": false 00:05:52.531 }, 00:05:52.531 "memory_domains": [ 00:05:52.531 { 00:05:52.531 "dma_device_id": "system", 00:05:52.531 "dma_device_type": 1 00:05:52.531 }, 00:05:52.531 { 00:05:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.531 "dma_device_type": 2 00:05:52.531 } 00:05:52.531 ], 00:05:52.531 "driver_specific": {} 00:05:52.531 } 00:05:52.531 ]' 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 [2024-11-29 10:11:31.809133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:52.531 [2024-11-29 10:11:31.809190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:52.531 [2024-11-29 10:11:31.809221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:52.531 [2024-11-29 10:11:31.809230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:52.531 [2024-11-29 10:11:31.811462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:52.531 [2024-11-29 10:11:31.811499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:52.531 
Passthru0 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:52.531 { 00:05:52.531 "name": "Malloc0", 00:05:52.531 "aliases": [ 00:05:52.531 "34fc38f8-368f-4341-af2a-bafd1a909a02" 00:05:52.531 ], 00:05:52.531 "product_name": "Malloc disk", 00:05:52.531 "block_size": 512, 00:05:52.531 "num_blocks": 16384, 00:05:52.531 "uuid": "34fc38f8-368f-4341-af2a-bafd1a909a02", 00:05:52.531 "assigned_rate_limits": { 00:05:52.531 "rw_ios_per_sec": 0, 00:05:52.531 "rw_mbytes_per_sec": 0, 00:05:52.531 "r_mbytes_per_sec": 0, 00:05:52.531 "w_mbytes_per_sec": 0 00:05:52.531 }, 00:05:52.531 "claimed": true, 00:05:52.531 "claim_type": "exclusive_write", 00:05:52.531 "zoned": false, 00:05:52.531 "supported_io_types": { 00:05:52.531 "read": true, 00:05:52.531 "write": true, 00:05:52.531 "unmap": true, 00:05:52.531 "flush": true, 00:05:52.531 "reset": true, 00:05:52.531 "nvme_admin": false, 00:05:52.531 "nvme_io": false, 00:05:52.531 "nvme_io_md": false, 00:05:52.531 "write_zeroes": true, 00:05:52.531 "zcopy": true, 00:05:52.531 "get_zone_info": false, 00:05:52.531 "zone_management": false, 00:05:52.531 "zone_append": false, 00:05:52.531 "compare": false, 00:05:52.531 "compare_and_write": false, 00:05:52.531 "abort": true, 00:05:52.531 "seek_hole": false, 00:05:52.531 "seek_data": false, 00:05:52.531 "copy": true, 00:05:52.531 "nvme_iov_md": false 00:05:52.531 }, 00:05:52.531 "memory_domains": [ 00:05:52.531 { 00:05:52.531 "dma_device_id": "system", 00:05:52.531 "dma_device_type": 1 00:05:52.531 }, 00:05:52.531 { 00:05:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.531 "dma_device_type": 2 00:05:52.531 } 00:05:52.531 ], 00:05:52.531 "driver_specific": {} 00:05:52.531 }, 00:05:52.531 { 00:05:52.531 "name": "Passthru0", 00:05:52.531 "aliases": [ 00:05:52.531 "d795a48f-7134-5197-8e35-3d2c9ce97da1" 00:05:52.531 ], 00:05:52.531 "product_name": "passthru", 00:05:52.531 "block_size": 512, 00:05:52.531 "num_blocks": 16384, 00:05:52.531 "uuid": "d795a48f-7134-5197-8e35-3d2c9ce97da1", 00:05:52.531 "assigned_rate_limits": { 00:05:52.531 "rw_ios_per_sec": 0, 00:05:52.531 "rw_mbytes_per_sec": 0, 00:05:52.531 "r_mbytes_per_sec": 0, 00:05:52.531 "w_mbytes_per_sec": 0 00:05:52.531 }, 00:05:52.531 "claimed": false, 00:05:52.531 "zoned": false, 00:05:52.531 "supported_io_types": { 00:05:52.531 "read": true, 00:05:52.531 "write": true, 00:05:52.531 "unmap": true, 00:05:52.531 "flush": true, 00:05:52.531 "reset": true, 00:05:52.531 "nvme_admin": false, 00:05:52.531 "nvme_io": false, 00:05:52.531 "nvme_io_md": false, 00:05:52.531 "write_zeroes": true, 00:05:52.531 "zcopy": true, 00:05:52.531 "get_zone_info": false, 00:05:52.531 "zone_management": false, 00:05:52.531 "zone_append": false, 00:05:52.531 "compare": false, 00:05:52.531 "compare_and_write": false, 00:05:52.531 "abort": true, 00:05:52.531 "seek_hole": false, 00:05:52.531 "seek_data": false, 00:05:52.531 "copy": true, 00:05:52.531 "nvme_iov_md": false 00:05:52.531 }, 00:05:52.531 "memory_domains": [ 00:05:52.531 { 00:05:52.531 "dma_device_id": "system", 00:05:52.531 "dma_device_type": 1 00:05:52.531 }, 
00:05:52.531 { 00:05:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.531 "dma_device_type": 2 00:05:52.531 } 00:05:52.531 ], 00:05:52.531 "driver_specific": { 00:05:52.531 "passthru": { 00:05:52.531 "name": "Passthru0", 00:05:52.531 "base_bdev_name": "Malloc0" 00:05:52.531 } 00:05:52.531 } 00:05:52.531 } 00:05:52.531 ]' 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.531 10:11:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.531 00:05:52.531 real 0m0.224s 00:05:52.531 user 0m0.132s 00:05:52.531 sys 0m0.029s 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 ************************************ 00:05:52.531 END TEST rpc_integrity 00:05:52.531 ************************************ 00:05:52.531 10:11:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:52.531 10:11:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.531 10:11:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.531 10:11:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 ************************************ 00:05:52.531 START TEST rpc_plugins 00:05:52.531 ************************************ 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:52.531 10:11:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.531 10:11:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:52.531 10:11:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.531 10:11:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.532 10:11:31 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:52.532 { 00:05:52.532 "name": "Malloc1", 00:05:52.532 "aliases": [ 00:05:52.532 "9c78876d-fec8-41ed-96d1-7686b53fd327" 00:05:52.532 ], 00:05:52.532 "product_name": "Malloc disk", 00:05:52.532 "block_size": 4096, 00:05:52.532 "num_blocks": 256, 00:05:52.532 "uuid": "9c78876d-fec8-41ed-96d1-7686b53fd327", 00:05:52.532 "assigned_rate_limits": { 00:05:52.532 "rw_ios_per_sec": 0, 00:05:52.532 "rw_mbytes_per_sec": 0, 00:05:52.532 "r_mbytes_per_sec": 0, 00:05:52.532 "w_mbytes_per_sec": 0 00:05:52.532 }, 00:05:52.532 "claimed": false, 00:05:52.532 "zoned": false, 00:05:52.532 "supported_io_types": { 00:05:52.532 "read": true, 00:05:52.532 "write": true, 00:05:52.532 "unmap": true, 00:05:52.532 "flush": true, 00:05:52.532 "reset": true, 00:05:52.532 "nvme_admin": false, 00:05:52.532 "nvme_io": false, 00:05:52.532 "nvme_io_md": false, 00:05:52.532 "write_zeroes": true, 00:05:52.532 "zcopy": true, 00:05:52.532 "get_zone_info": false, 00:05:52.532 "zone_management": false, 00:05:52.532 "zone_append": false, 00:05:52.532 "compare": false, 00:05:52.532 "compare_and_write": false, 00:05:52.532 "abort": true, 00:05:52.532 "seek_hole": false, 00:05:52.532 "seek_data": false, 00:05:52.532 "copy": true, 00:05:52.532 "nvme_iov_md": false 00:05:52.532 }, 00:05:52.532 "memory_domains": [ 00:05:52.532 { 00:05:52.532 "dma_device_id": "system", 00:05:52.532 "dma_device_type": 1 00:05:52.532 }, 00:05:52.532 { 00:05:52.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.532 "dma_device_type": 2 00:05:52.532 } 00:05:52.532 ], 00:05:52.532 "driver_specific": {} 00:05:52.532 } 00:05:52.532 ]' 00:05:52.790 10:11:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:52.790 10:11:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:52.790 10:11:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.790 10:11:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.790 10:11:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:52.790 10:11:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:52.790 10:11:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:52.790 00:05:52.790 real 0m0.119s 00:05:52.790 user 0m0.068s 00:05:52.790 sys 0m0.015s 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.790 10:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.790 ************************************ 00:05:52.790 END TEST rpc_plugins 00:05:52.790 ************************************ 00:05:52.790 10:11:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:52.790 10:11:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.790 10:11:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.790 10:11:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.790 ************************************ 00:05:52.790 START TEST rpc_trace_cmd_test 
00:05:52.790 ************************************ 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:52.790 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69103", 00:05:52.790 "tpoint_group_mask": "0x8", 00:05:52.790 "iscsi_conn": { 00:05:52.790 "mask": "0x2", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "scsi": { 00:05:52.790 "mask": "0x4", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "bdev": { 00:05:52.790 "mask": "0x8", 00:05:52.790 "tpoint_mask": "0xffffffffffffffff" 00:05:52.790 }, 00:05:52.790 "nvmf_rdma": { 00:05:52.790 "mask": "0x10", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "nvmf_tcp": { 00:05:52.790 "mask": "0x20", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "ftl": { 00:05:52.790 "mask": "0x40", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "blobfs": { 00:05:52.790 "mask": "0x80", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "dsa": { 00:05:52.790 "mask": "0x200", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "thread": { 00:05:52.790 "mask": "0x400", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "nvme_pcie": { 00:05:52.790 "mask": "0x800", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "iaa": { 00:05:52.790 "mask": "0x1000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "nvme_tcp": { 00:05:52.790 "mask": "0x2000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "bdev_nvme": { 00:05:52.790 "mask": "0x4000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "sock": { 00:05:52.790 "mask": "0x8000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "blob": { 00:05:52.790 "mask": "0x10000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "bdev_raid": { 00:05:52.790 "mask": "0x20000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 }, 00:05:52.790 "scheduler": { 00:05:52.790 "mask": "0x40000", 00:05:52.790 "tpoint_mask": "0x0" 00:05:52.790 } 00:05:52.790 }' 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:52.790 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:53.051 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:53.051 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:53.051 10:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:53.051 00:05:53.051 real 0m0.192s 00:05:53.051 
user 0m0.155s 00:05:53.051 sys 0m0.026s 00:05:53.051 10:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.051 10:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:53.051 ************************************ 00:05:53.051 END TEST rpc_trace_cmd_test 00:05:53.051 ************************************ 00:05:53.051 10:11:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:53.051 10:11:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:53.051 10:11:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:53.051 10:11:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.051 10:11:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.051 10:11:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.051 ************************************ 00:05:53.051 START TEST rpc_daemon_integrity 00:05:53.051 ************************************ 00:05:53.051 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:53.052 { 00:05:53.052 "name": "Malloc2", 00:05:53.052 "aliases": [ 00:05:53.052 "19b9dc15-fb93-43ca-bf02-5698c600ec56" 00:05:53.052 ], 00:05:53.052 "product_name": "Malloc disk", 00:05:53.052 "block_size": 512, 00:05:53.052 "num_blocks": 16384, 00:05:53.052 "uuid": "19b9dc15-fb93-43ca-bf02-5698c600ec56", 00:05:53.052 "assigned_rate_limits": { 00:05:53.052 "rw_ios_per_sec": 0, 00:05:53.052 "rw_mbytes_per_sec": 0, 00:05:53.052 "r_mbytes_per_sec": 0, 00:05:53.052 "w_mbytes_per_sec": 0 00:05:53.052 }, 00:05:53.052 "claimed": false, 00:05:53.052 "zoned": false, 00:05:53.052 "supported_io_types": { 00:05:53.052 "read": true, 00:05:53.052 "write": true, 00:05:53.052 "unmap": true, 00:05:53.052 "flush": true, 00:05:53.052 "reset": true, 00:05:53.052 "nvme_admin": false, 00:05:53.052 "nvme_io": false, 00:05:53.052 "nvme_io_md": false, 00:05:53.052 "write_zeroes": true, 00:05:53.052 "zcopy": true, 00:05:53.052 "get_zone_info": 
false, 00:05:53.052 "zone_management": false, 00:05:53.052 "zone_append": false, 00:05:53.052 "compare": false, 00:05:53.052 "compare_and_write": false, 00:05:53.052 "abort": true, 00:05:53.052 "seek_hole": false, 00:05:53.052 "seek_data": false, 00:05:53.052 "copy": true, 00:05:53.052 "nvme_iov_md": false 00:05:53.052 }, 00:05:53.052 "memory_domains": [ 00:05:53.052 { 00:05:53.052 "dma_device_id": "system", 00:05:53.052 "dma_device_type": 1 00:05:53.052 }, 00:05:53.052 { 00:05:53.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.052 "dma_device_type": 2 00:05:53.052 } 00:05:53.052 ], 00:05:53.052 "driver_specific": {} 00:05:53.052 } 00:05:53.052 ]' 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.052 [2024-11-29 10:11:32.457534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:53.052 [2024-11-29 10:11:32.457584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.052 [2024-11-29 10:11:32.457604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:53.052 [2024-11-29 10:11:32.457613] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.052 [2024-11-29 10:11:32.459770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.052 [2024-11-29 10:11:32.459815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:53.052 Passthru0 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:53.052 { 00:05:53.052 "name": "Malloc2", 00:05:53.052 "aliases": [ 00:05:53.052 "19b9dc15-fb93-43ca-bf02-5698c600ec56" 00:05:53.052 ], 00:05:53.052 "product_name": "Malloc disk", 00:05:53.052 "block_size": 512, 00:05:53.052 "num_blocks": 16384, 00:05:53.052 "uuid": "19b9dc15-fb93-43ca-bf02-5698c600ec56", 00:05:53.052 "assigned_rate_limits": { 00:05:53.052 "rw_ios_per_sec": 0, 00:05:53.052 "rw_mbytes_per_sec": 0, 00:05:53.052 "r_mbytes_per_sec": 0, 00:05:53.052 "w_mbytes_per_sec": 0 00:05:53.052 }, 00:05:53.052 "claimed": true, 00:05:53.052 "claim_type": "exclusive_write", 00:05:53.052 "zoned": false, 00:05:53.052 "supported_io_types": { 00:05:53.052 "read": true, 00:05:53.052 "write": true, 00:05:53.052 "unmap": true, 00:05:53.052 "flush": true, 00:05:53.052 "reset": true, 00:05:53.052 "nvme_admin": false, 00:05:53.052 "nvme_io": false, 00:05:53.052 "nvme_io_md": false, 00:05:53.052 "write_zeroes": true, 00:05:53.052 "zcopy": true, 00:05:53.052 "get_zone_info": false, 00:05:53.052 "zone_management": false, 00:05:53.052 "zone_append": false, 00:05:53.052 "compare": false, 
00:05:53.052 "compare_and_write": false, 00:05:53.052 "abort": true, 00:05:53.052 "seek_hole": false, 00:05:53.052 "seek_data": false, 00:05:53.052 "copy": true, 00:05:53.052 "nvme_iov_md": false 00:05:53.052 }, 00:05:53.052 "memory_domains": [ 00:05:53.052 { 00:05:53.052 "dma_device_id": "system", 00:05:53.052 "dma_device_type": 1 00:05:53.052 }, 00:05:53.052 { 00:05:53.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.052 "dma_device_type": 2 00:05:53.052 } 00:05:53.052 ], 00:05:53.052 "driver_specific": {} 00:05:53.052 }, 00:05:53.052 { 00:05:53.052 "name": "Passthru0", 00:05:53.052 "aliases": [ 00:05:53.052 "a3752ac9-4c1e-5b65-8d19-8b05909f6bcb" 00:05:53.052 ], 00:05:53.052 "product_name": "passthru", 00:05:53.052 "block_size": 512, 00:05:53.052 "num_blocks": 16384, 00:05:53.052 "uuid": "a3752ac9-4c1e-5b65-8d19-8b05909f6bcb", 00:05:53.052 "assigned_rate_limits": { 00:05:53.052 "rw_ios_per_sec": 0, 00:05:53.052 "rw_mbytes_per_sec": 0, 00:05:53.052 "r_mbytes_per_sec": 0, 00:05:53.052 "w_mbytes_per_sec": 0 00:05:53.052 }, 00:05:53.052 "claimed": false, 00:05:53.052 "zoned": false, 00:05:53.052 "supported_io_types": { 00:05:53.052 "read": true, 00:05:53.052 "write": true, 00:05:53.052 "unmap": true, 00:05:53.052 "flush": true, 00:05:53.052 "reset": true, 00:05:53.052 "nvme_admin": false, 00:05:53.052 "nvme_io": false, 00:05:53.052 "nvme_io_md": false, 00:05:53.052 "write_zeroes": true, 00:05:53.052 "zcopy": true, 00:05:53.052 "get_zone_info": false, 00:05:53.052 "zone_management": false, 00:05:53.052 "zone_append": false, 00:05:53.052 "compare": false, 00:05:53.052 "compare_and_write": false, 00:05:53.052 "abort": true, 00:05:53.052 "seek_hole": false, 00:05:53.052 "seek_data": false, 00:05:53.052 "copy": true, 00:05:53.052 "nvme_iov_md": false 00:05:53.052 }, 00:05:53.052 "memory_domains": [ 00:05:53.052 { 00:05:53.052 "dma_device_id": "system", 00:05:53.052 "dma_device_type": 1 00:05:53.052 }, 00:05:53.052 { 00:05:53.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.052 "dma_device_type": 2 00:05:53.052 } 00:05:53.052 ], 00:05:53.052 "driver_specific": { 00:05:53.052 "passthru": { 00:05:53.052 "name": "Passthru0", 00:05:53.052 "base_bdev_name": "Malloc2" 00:05:53.052 } 00:05:53.052 } 00:05:53.052 } 00:05:53.052 ]' 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.052 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:53.394 00:05:53.394 real 0m0.214s 00:05:53.394 user 0m0.124s 00:05:53.394 sys 0m0.023s 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.394 10:11:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.394 ************************************ 00:05:53.394 END TEST rpc_daemon_integrity 00:05:53.394 ************************************ 00:05:53.394 10:11:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:53.394 10:11:32 rpc -- rpc/rpc.sh@84 -- # killprocess 69103 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 69103 ']' 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@958 -- # kill -0 69103 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69103 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.394 killing process with pid 69103 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69103' 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@973 -- # kill 69103 00:05:53.394 10:11:32 rpc -- common/autotest_common.sh@978 -- # wait 69103 00:05:53.653 00:05:53.653 real 0m2.206s 00:05:53.653 user 0m2.683s 00:05:53.653 sys 0m0.536s 00:05:53.653 10:11:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.653 10:11:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.653 ************************************ 00:05:53.653 END TEST rpc 00:05:53.653 ************************************ 00:05:53.653 10:11:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.653 10:11:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.653 10:11:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.653 10:11:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.653 ************************************ 00:05:53.653 START TEST skip_rpc 00:05:53.653 ************************************ 00:05:53.654 10:11:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.654 * Looking for test storage... 
00:05:53.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.654 10:11:32 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.654 10:11:32 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.654 10:11:32 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.654 10:11:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 10:11:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.654 10:11:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.654 10:11:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.654 10:11:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.654 ************************************ 00:05:53.654 START TEST skip_rpc 00:05:53.654 ************************************ 00:05:53.654 10:11:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:53.654 10:11:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69304 00:05:53.654 10:11:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.654 10:11:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:53.654 10:11:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:53.654 [2024-11-29 10:11:33.100395] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:53.654 [2024-11-29 10:11:33.100680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69304 ] 00:05:53.913 [2024-11-29 10:11:33.253562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.913 [2024-11-29 10:11:33.272638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69304 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69304 ']' 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69304 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69304 00:05:59.178 killing process with pid 69304 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69304' 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69304 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69304 00:05:59.178 00:05:59.178 real 0m5.255s 00:05:59.178 user 0m4.914s 00:05:59.178 sys 0m0.239s 00:05:59.178 ************************************ 00:05:59.178 END TEST skip_rpc 00:05:59.178 ************************************ 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.178 10:11:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:59.178 10:11:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:59.178 10:11:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.178 10:11:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.178 10:11:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.178 ************************************ 00:05:59.178 START TEST skip_rpc_with_json 00:05:59.178 ************************************ 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69386 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69386 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69386 ']' 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.178 10:11:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.178 [2024-11-29 10:11:38.417175] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:59.178 [2024-11-29 10:11:38.417446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69386 ] 00:05:59.178 [2024-11-29 10:11:38.548335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.178 [2024-11-29 10:11:38.564320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.112 [2024-11-29 10:11:39.212378] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:00.112 request: 00:06:00.112 { 00:06:00.112 "trtype": "tcp", 00:06:00.112 "method": "nvmf_get_transports", 00:06:00.112 "req_id": 1 00:06:00.112 } 00:06:00.112 Got JSON-RPC error response 00:06:00.112 response: 00:06:00.112 { 00:06:00.112 "code": -19, 00:06:00.112 "message": "No such device" 00:06:00.112 } 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.112 [2024-11-29 10:11:39.220471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.112 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.112 { 00:06:00.112 "subsystems": [ 00:06:00.112 { 00:06:00.112 "subsystem": "fsdev", 00:06:00.112 "config": [ 00:06:00.112 { 00:06:00.112 "method": "fsdev_set_opts", 00:06:00.112 "params": { 00:06:00.112 "fsdev_io_pool_size": 65535, 00:06:00.112 "fsdev_io_cache_size": 256 00:06:00.112 } 00:06:00.112 } 00:06:00.112 ] 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "subsystem": "keyring", 00:06:00.112 "config": [] 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "subsystem": "iobuf", 00:06:00.112 "config": [ 00:06:00.112 { 00:06:00.112 "method": "iobuf_set_options", 00:06:00.112 "params": { 00:06:00.112 "small_pool_count": 8192, 00:06:00.112 "large_pool_count": 1024, 00:06:00.112 "small_bufsize": 8192, 00:06:00.112 "large_bufsize": 135168, 00:06:00.112 "enable_numa": false 00:06:00.112 } 00:06:00.112 } 00:06:00.112 ] 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "subsystem": "sock", 00:06:00.112 "config": [ 00:06:00.112 { 
00:06:00.112 "method": "sock_set_default_impl", 00:06:00.112 "params": { 00:06:00.112 "impl_name": "posix" 00:06:00.112 } 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "method": "sock_impl_set_options", 00:06:00.112 "params": { 00:06:00.112 "impl_name": "ssl", 00:06:00.112 "recv_buf_size": 4096, 00:06:00.112 "send_buf_size": 4096, 00:06:00.112 "enable_recv_pipe": true, 00:06:00.112 "enable_quickack": false, 00:06:00.112 "enable_placement_id": 0, 00:06:00.112 "enable_zerocopy_send_server": true, 00:06:00.112 "enable_zerocopy_send_client": false, 00:06:00.112 "zerocopy_threshold": 0, 00:06:00.112 "tls_version": 0, 00:06:00.112 "enable_ktls": false 00:06:00.112 } 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "method": "sock_impl_set_options", 00:06:00.112 "params": { 00:06:00.112 "impl_name": "posix", 00:06:00.112 "recv_buf_size": 2097152, 00:06:00.112 "send_buf_size": 2097152, 00:06:00.112 "enable_recv_pipe": true, 00:06:00.112 "enable_quickack": false, 00:06:00.112 "enable_placement_id": 0, 00:06:00.112 "enable_zerocopy_send_server": true, 00:06:00.112 "enable_zerocopy_send_client": false, 00:06:00.112 "zerocopy_threshold": 0, 00:06:00.112 "tls_version": 0, 00:06:00.112 "enable_ktls": false 00:06:00.112 } 00:06:00.112 } 00:06:00.112 ] 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "subsystem": "vmd", 00:06:00.112 "config": [] 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "subsystem": "accel", 00:06:00.112 "config": [ 00:06:00.112 { 00:06:00.112 "method": "accel_set_options", 00:06:00.112 "params": { 00:06:00.112 "small_cache_size": 128, 00:06:00.112 "large_cache_size": 16, 00:06:00.112 "task_count": 2048, 00:06:00.112 "sequence_count": 2048, 00:06:00.112 "buf_count": 2048 00:06:00.112 } 00:06:00.112 } 00:06:00.112 ] 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "subsystem": "bdev", 00:06:00.112 "config": [ 00:06:00.112 { 00:06:00.112 "method": "bdev_set_options", 00:06:00.112 "params": { 00:06:00.112 "bdev_io_pool_size": 65535, 00:06:00.112 "bdev_io_cache_size": 256, 00:06:00.112 "bdev_auto_examine": true, 00:06:00.112 "iobuf_small_cache_size": 128, 00:06:00.112 "iobuf_large_cache_size": 16 00:06:00.112 } 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "method": "bdev_raid_set_options", 00:06:00.112 "params": { 00:06:00.112 "process_window_size_kb": 1024, 00:06:00.112 "process_max_bandwidth_mb_sec": 0 00:06:00.112 } 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "method": "bdev_iscsi_set_options", 00:06:00.112 "params": { 00:06:00.112 "timeout_sec": 30 00:06:00.112 } 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "method": "bdev_nvme_set_options", 00:06:00.112 "params": { 00:06:00.112 "action_on_timeout": "none", 00:06:00.112 "timeout_us": 0, 00:06:00.112 "timeout_admin_us": 0, 00:06:00.112 "keep_alive_timeout_ms": 10000, 00:06:00.112 "arbitration_burst": 0, 00:06:00.112 "low_priority_weight": 0, 00:06:00.112 "medium_priority_weight": 0, 00:06:00.112 "high_priority_weight": 0, 00:06:00.112 "nvme_adminq_poll_period_us": 10000, 00:06:00.112 "nvme_ioq_poll_period_us": 0, 00:06:00.112 "io_queue_requests": 0, 00:06:00.112 "delay_cmd_submit": true, 00:06:00.112 "transport_retry_count": 4, 00:06:00.112 "bdev_retry_count": 3, 00:06:00.112 "transport_ack_timeout": 0, 00:06:00.112 "ctrlr_loss_timeout_sec": 0, 00:06:00.112 "reconnect_delay_sec": 0, 00:06:00.112 "fast_io_fail_timeout_sec": 0, 00:06:00.112 "disable_auto_failback": false, 00:06:00.112 "generate_uuids": false, 00:06:00.112 "transport_tos": 0, 00:06:00.112 "nvme_error_stat": false, 00:06:00.112 "rdma_srq_size": 0, 00:06:00.112 "io_path_stat": false, 
00:06:00.112 "allow_accel_sequence": false, 00:06:00.112 "rdma_max_cq_size": 0, 00:06:00.112 "rdma_cm_event_timeout_ms": 0, 00:06:00.112 "dhchap_digests": [ 00:06:00.112 "sha256", 00:06:00.112 "sha384", 00:06:00.113 "sha512" 00:06:00.113 ], 00:06:00.113 "dhchap_dhgroups": [ 00:06:00.113 "null", 00:06:00.113 "ffdhe2048", 00:06:00.113 "ffdhe3072", 00:06:00.113 "ffdhe4096", 00:06:00.113 "ffdhe6144", 00:06:00.113 "ffdhe8192" 00:06:00.113 ] 00:06:00.113 } 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "method": "bdev_nvme_set_hotplug", 00:06:00.113 "params": { 00:06:00.113 "period_us": 100000, 00:06:00.113 "enable": false 00:06:00.113 } 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "method": "bdev_wait_for_examine" 00:06:00.113 } 00:06:00.113 ] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "scsi", 00:06:00.113 "config": null 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "scheduler", 00:06:00.113 "config": [ 00:06:00.113 { 00:06:00.113 "method": "framework_set_scheduler", 00:06:00.113 "params": { 00:06:00.113 "name": "static" 00:06:00.113 } 00:06:00.113 } 00:06:00.113 ] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "vhost_scsi", 00:06:00.113 "config": [] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "vhost_blk", 00:06:00.113 "config": [] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "ublk", 00:06:00.113 "config": [] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "nbd", 00:06:00.113 "config": [] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "nvmf", 00:06:00.113 "config": [ 00:06:00.113 { 00:06:00.113 "method": "nvmf_set_config", 00:06:00.113 "params": { 00:06:00.113 "discovery_filter": "match_any", 00:06:00.113 "admin_cmd_passthru": { 00:06:00.113 "identify_ctrlr": false 00:06:00.113 }, 00:06:00.113 "dhchap_digests": [ 00:06:00.113 "sha256", 00:06:00.113 "sha384", 00:06:00.113 "sha512" 00:06:00.113 ], 00:06:00.113 "dhchap_dhgroups": [ 00:06:00.113 "null", 00:06:00.113 "ffdhe2048", 00:06:00.113 "ffdhe3072", 00:06:00.113 "ffdhe4096", 00:06:00.113 "ffdhe6144", 00:06:00.113 "ffdhe8192" 00:06:00.113 ] 00:06:00.113 } 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "method": "nvmf_set_max_subsystems", 00:06:00.113 "params": { 00:06:00.113 "max_subsystems": 1024 00:06:00.113 } 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "method": "nvmf_set_crdt", 00:06:00.113 "params": { 00:06:00.113 "crdt1": 0, 00:06:00.113 "crdt2": 0, 00:06:00.113 "crdt3": 0 00:06:00.113 } 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "method": "nvmf_create_transport", 00:06:00.113 "params": { 00:06:00.113 "trtype": "TCP", 00:06:00.113 "max_queue_depth": 128, 00:06:00.113 "max_io_qpairs_per_ctrlr": 127, 00:06:00.113 "in_capsule_data_size": 4096, 00:06:00.113 "max_io_size": 131072, 00:06:00.113 "io_unit_size": 131072, 00:06:00.113 "max_aq_depth": 128, 00:06:00.113 "num_shared_buffers": 511, 00:06:00.113 "buf_cache_size": 4294967295, 00:06:00.113 "dif_insert_or_strip": false, 00:06:00.113 "zcopy": false, 00:06:00.113 "c2h_success": true, 00:06:00.113 "sock_priority": 0, 00:06:00.113 "abort_timeout_sec": 1, 00:06:00.113 "ack_timeout": 0, 00:06:00.113 "data_wr_pool_size": 0 00:06:00.113 } 00:06:00.113 } 00:06:00.113 ] 00:06:00.113 }, 00:06:00.113 { 00:06:00.113 "subsystem": "iscsi", 00:06:00.113 "config": [ 00:06:00.113 { 00:06:00.113 "method": "iscsi_set_options", 00:06:00.113 "params": { 00:06:00.113 "node_base": "iqn.2016-06.io.spdk", 00:06:00.113 "max_sessions": 128, 00:06:00.113 "max_connections_per_session": 2, 00:06:00.113 "max_queue_depth": 64, 00:06:00.113 
"default_time2wait": 2, 00:06:00.113 "default_time2retain": 20, 00:06:00.113 "first_burst_length": 8192, 00:06:00.113 "immediate_data": true, 00:06:00.113 "allow_duplicated_isid": false, 00:06:00.113 "error_recovery_level": 0, 00:06:00.113 "nop_timeout": 60, 00:06:00.113 "nop_in_interval": 30, 00:06:00.113 "disable_chap": false, 00:06:00.113 "require_chap": false, 00:06:00.113 "mutual_chap": false, 00:06:00.113 "chap_group": 0, 00:06:00.113 "max_large_datain_per_connection": 64, 00:06:00.113 "max_r2t_per_connection": 4, 00:06:00.113 "pdu_pool_size": 36864, 00:06:00.113 "immediate_data_pool_size": 16384, 00:06:00.113 "data_out_pool_size": 2048 00:06:00.113 } 00:06:00.113 } 00:06:00.113 ] 00:06:00.113 } 00:06:00.113 ] 00:06:00.113 } 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69386 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69386 ']' 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69386 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69386 00:06:00.113 killing process with pid 69386 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69386' 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69386 00:06:00.113 10:11:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69386 00:06:00.371 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69409 00:06:00.371 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.371 10:11:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69409 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69409 ']' 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69409 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69409 00:06:05.633 killing process with pid 69409 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69409' 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69409 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69409 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.633 ************************************ 00:06:05.633 END TEST skip_rpc_with_json 00:06:05.633 ************************************ 00:06:05.633 00:06:05.633 real 0m6.538s 00:06:05.633 user 0m6.229s 00:06:05.633 sys 0m0.494s 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.633 10:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.633 10:11:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:05.633 10:11:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.634 10:11:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.634 10:11:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.634 ************************************ 00:06:05.634 START TEST skip_rpc_with_delay 00:06:05.634 ************************************ 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:05.634 10:11:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.634 [2024-11-29 10:11:45.010708] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
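The *ERROR* line above is the expected outcome rather than a failure: test_skip_rpc_with_delay asserts that spdk_tgt exits non-zero when --wait-for-rpc is combined with --no-rpc-server. A minimal stand-alone sketch of that assertion, reusing the invocation seen in this run (illustrative only; this snippet is not part of the captured log):

    # The target must refuse to start: --wait-for-rpc has no RPC server
    # to wait on once --no-rpc-server is set, so success here is a bug.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected: spdk_tgt started while asked to wait for RPC with no RPC server' >&2
        exit 1
    fi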
00:06:05.634 10:11:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:05.634 10:11:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.634 10:11:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.634 10:11:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.634 00:06:05.634 real 0m0.114s 00:06:05.634 user 0m0.063s 00:06:05.634 sys 0m0.050s 00:06:05.634 ************************************ 00:06:05.634 END TEST skip_rpc_with_delay 00:06:05.634 ************************************ 00:06:05.634 10:11:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.634 10:11:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:05.634 10:11:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:05.893 10:11:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:05.894 10:11:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:05.894 10:11:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.894 10:11:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.894 10:11:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.894 ************************************ 00:06:05.894 START TEST exit_on_failed_rpc_init 00:06:05.894 ************************************ 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69521 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69521 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69521 ']' 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.894 10:11:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.894 [2024-11-29 10:11:45.180117] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:05.894 [2024-11-29 10:11:45.180224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69521 ] 00:06:05.894 [2024-11-29 10:11:45.319576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.894 [2024-11-29 10:11:45.336397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:06.829 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.829 [2024-11-29 10:11:46.099043] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:06.829 [2024-11-29 10:11:46.099158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69539 ] 00:06:06.830 [2024-11-29 10:11:46.241726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.830 [2024-11-29 10:11:46.261633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.830 [2024-11-29 10:11:46.261710] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:06.830 [2024-11-29 10:11:46.261725] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:06.830 [2024-11-29 10:11:46.261737] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69521 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69521 ']' 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69521 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69521 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.088 killing process with pid 69521 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69521' 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69521 00:06:07.088 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69521 00:06:07.348 00:06:07.348 real 0m1.472s 00:06:07.348 user 0m1.635s 00:06:07.348 sys 0m0.351s 00:06:07.348 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.348 10:11:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.348 ************************************ 00:06:07.348 END TEST exit_on_failed_rpc_init 00:06:07.348 ************************************ 00:06:07.348 10:11:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.348 00:06:07.348 real 0m13.725s 00:06:07.348 user 0m12.967s 00:06:07.348 sys 0m1.309s 00:06:07.348 10:11:46 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.348 ************************************ 00:06:07.348 END TEST skip_rpc 00:06:07.348 ************************************ 00:06:07.348 10:11:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.348 10:11:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:07.348 10:11:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.348 10:11:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.348 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.348 
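The exit_on_failed_rpc_init sequence above hinges on the RPC socket conflict reported by rpc.c: the second target (-m 0x2) cannot bind /var/tmp/spdk.sock while the first (-m 0x1) holds it, so the app stops non-zero and the test maps that to es=234. A hedged sketch reproducing the conflict by hand, using the binary path from this run (illustrative; not part of the captured output):

    # First instance claims the default RPC socket /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 1
    # Second instance targets the same socket and is expected to fail startup
    # with "RPC Unix domain socket path /var/tmp/spdk.sock in use".
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
    echo "second instance exited with $?"   # non-zero while the socket is held
    kill "$first_pid"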
************************************ 00:06:07.348 START TEST rpc_client 00:06:07.348 ************************************ 00:06:07.348 10:11:46 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:07.348 * Looking for test storage... 00:06:07.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:07.348 10:11:46 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.348 10:11:46 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.348 10:11:46 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.348 10:11:46 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.348 10:11:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.607 10:11:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:07.607 10:11:46 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.607 10:11:46 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.607 --rc genhtml_branch_coverage=1 00:06:07.607 --rc genhtml_function_coverage=1 00:06:07.607 --rc genhtml_legend=1 00:06:07.607 --rc geninfo_all_blocks=1 00:06:07.607 --rc geninfo_unexecuted_blocks=1 00:06:07.607 00:06:07.607 ' 00:06:07.607 10:11:46 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.607 --rc genhtml_branch_coverage=1 00:06:07.607 --rc genhtml_function_coverage=1 00:06:07.607 --rc genhtml_legend=1 00:06:07.607 --rc geninfo_all_blocks=1 00:06:07.607 --rc geninfo_unexecuted_blocks=1 00:06:07.607 00:06:07.607 ' 00:06:07.607 10:11:46 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.607 --rc genhtml_branch_coverage=1 00:06:07.607 --rc genhtml_function_coverage=1 00:06:07.607 --rc genhtml_legend=1 00:06:07.607 --rc geninfo_all_blocks=1 00:06:07.607 --rc geninfo_unexecuted_blocks=1 00:06:07.607 00:06:07.607 ' 00:06:07.607 10:11:46 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.607 --rc genhtml_branch_coverage=1 00:06:07.608 --rc genhtml_function_coverage=1 00:06:07.608 --rc genhtml_legend=1 00:06:07.608 --rc geninfo_all_blocks=1 00:06:07.608 --rc geninfo_unexecuted_blocks=1 00:06:07.608 00:06:07.608 ' 00:06:07.608 10:11:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:07.608 OK 00:06:07.608 10:11:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:07.608 00:06:07.608 real 0m0.184s 00:06:07.608 user 0m0.092s 00:06:07.608 sys 0m0.093s 00:06:07.608 10:11:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.608 10:11:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:07.608 ************************************ 00:06:07.608 END TEST rpc_client 00:06:07.608 ************************************ 00:06:07.608 10:11:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:07.608 10:11:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.608 10:11:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.608 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.608 ************************************ 00:06:07.608 START TEST json_config 00:06:07.608 ************************************ 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.608 10:11:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.608 10:11:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.608 10:11:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.608 10:11:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.608 10:11:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.608 10:11:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:07.608 10:11:46 json_config -- scripts/common.sh@345 -- # : 1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.608 10:11:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.608 10:11:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@353 -- # local d=1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.608 10:11:46 json_config -- scripts/common.sh@355 -- # echo 1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.608 10:11:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@353 -- # local d=2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.608 10:11:46 json_config -- scripts/common.sh@355 -- # echo 2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.608 10:11:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.608 10:11:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.608 10:11:46 json_config -- scripts/common.sh@368 -- # return 0 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.608 --rc genhtml_branch_coverage=1 00:06:07.608 --rc genhtml_function_coverage=1 00:06:07.608 --rc genhtml_legend=1 00:06:07.608 --rc geninfo_all_blocks=1 00:06:07.608 --rc geninfo_unexecuted_blocks=1 00:06:07.608 00:06:07.608 ' 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.608 --rc genhtml_branch_coverage=1 00:06:07.608 --rc genhtml_function_coverage=1 00:06:07.608 --rc genhtml_legend=1 00:06:07.608 --rc geninfo_all_blocks=1 00:06:07.608 --rc geninfo_unexecuted_blocks=1 00:06:07.608 00:06:07.608 ' 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.608 --rc genhtml_branch_coverage=1 00:06:07.608 --rc genhtml_function_coverage=1 00:06:07.608 --rc genhtml_legend=1 00:06:07.608 --rc geninfo_all_blocks=1 00:06:07.608 --rc geninfo_unexecuted_blocks=1 00:06:07.608 00:06:07.608 ' 00:06:07.608 10:11:46 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.608 --rc genhtml_branch_coverage=1 00:06:07.608 --rc genhtml_function_coverage=1 00:06:07.608 --rc genhtml_legend=1 00:06:07.608 --rc geninfo_all_blocks=1 00:06:07.608 --rc geninfo_unexecuted_blocks=1 00:06:07.608 00:06:07.608 ' 00:06:07.608 10:11:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.608 10:11:46 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3994a1af-dd19-4228-ab77-2da8b76d5ca6 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3994a1af-dd19-4228-ab77-2da8b76d5ca6 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.608 10:11:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.608 10:11:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.608 10:11:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.608 10:11:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.608 10:11:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.608 10:11:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.608 10:11:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.608 10:11:46 json_config -- paths/export.sh@5 -- # export PATH 00:06:07.608 10:11:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@51 -- # : 0 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.608 10:11:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.608 10:11:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.608 10:11:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.608 10:11:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.608 10:11:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.608 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.608 10:11:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.608 10:11:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.608 10:11:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:07.609 WARNING: No tests are enabled so not running JSON configuration tests 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:07.609 10:11:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:07.609 00:06:07.609 real 0m0.128s 00:06:07.609 user 0m0.095s 00:06:07.609 sys 0m0.039s 00:06:07.609 10:11:47 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.609 10:11:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.609 ************************************ 00:06:07.609 END TEST json_config 00:06:07.609 ************************************ 00:06:07.609 10:11:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.609 10:11:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.609 10:11:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.609 10:11:47 -- common/autotest_common.sh@10 -- # set +x 00:06:07.609 ************************************ 00:06:07.609 START TEST json_config_extra_key 00:06:07.609 ************************************ 00:06:07.609 10:11:47 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.978 10:11:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.978 10:11:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.978 10:11:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.978 10:11:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.978 10:11:47 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.978 10:11:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:07.979 10:11:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.979 10:11:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.979 --rc genhtml_branch_coverage=1 00:06:07.979 --rc genhtml_function_coverage=1 00:06:07.979 --rc genhtml_legend=1 00:06:07.979 --rc geninfo_all_blocks=1 00:06:07.979 --rc geninfo_unexecuted_blocks=1 00:06:07.979 00:06:07.979 ' 00:06:07.979 10:11:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.979 --rc genhtml_branch_coverage=1 00:06:07.979 --rc genhtml_function_coverage=1 00:06:07.979 --rc genhtml_legend=1 00:06:07.979 --rc geninfo_all_blocks=1 00:06:07.979 --rc geninfo_unexecuted_blocks=1 00:06:07.979 00:06:07.979 ' 00:06:07.979 10:11:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.979 --rc genhtml_branch_coverage=1 00:06:07.979 --rc genhtml_function_coverage=1 00:06:07.979 --rc genhtml_legend=1 00:06:07.979 --rc geninfo_all_blocks=1 00:06:07.979 --rc geninfo_unexecuted_blocks=1 00:06:07.979 00:06:07.979 ' 00:06:07.979 10:11:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.979 --rc genhtml_branch_coverage=1 00:06:07.979 --rc 
genhtml_function_coverage=1 00:06:07.979 --rc genhtml_legend=1 00:06:07.979 --rc geninfo_all_blocks=1 00:06:07.979 --rc geninfo_unexecuted_blocks=1 00:06:07.979 00:06:07.979 ' 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3994a1af-dd19-4228-ab77-2da8b76d5ca6 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3994a1af-dd19-4228-ab77-2da8b76d5ca6 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.979 10:11:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.979 10:11:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.979 10:11:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.979 10:11:47 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.979 10:11:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:07.979 10:11:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.979 10:11:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:07.979 INFO: launching applications... 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
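[editor's note] Twice in this stretch, nvmf/common.sh line 33 logs "[: : integer expression expected": the xtraced test is '[' '' -eq 1 ']', and test's -eq operator demands integer operands, so a flag variable that expands to the empty string makes the comparison itself error out (exit status 2) instead of simply evaluating false. A minimal reproduction with a defensive default follows; FLAG is a hypothetical stand-in, since the log does not name the variable that is empty at that line:

  #!/usr/bin/env bash
  FLAG=""                                    # empty, as in the log
  [ "$FLAG" -eq 1 ] 2>/dev/null || echo "comparison errored or was false"
  # guard: default the expansion to 0 so -eq always sees an integer
  if [ "${FLAG:-0}" -eq 1 ]; then echo "flag set"; fi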
00:06:07.979 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.979 10:11:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:07.979 10:11:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:07.979 10:11:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.979 10:11:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.979 10:11:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.980 10:11:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.980 10:11:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.980 10:11:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69716 00:06:07.980 Waiting for target to run... 00:06:07.980 10:11:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.980 10:11:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69716 /var/tmp/spdk_tgt.sock 00:06:07.980 10:11:47 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69716 ']' 00:06:07.980 10:11:47 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.980 10:11:47 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.980 10:11:47 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.980 10:11:47 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.980 10:11:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.980 10:11:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.980 [2024-11-29 10:11:47.238007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:07.980 [2024-11-29 10:11:47.238123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69716 ] 00:06:08.267 [2024-11-29 10:11:47.547248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.267 [2024-11-29 10:11:47.558556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.525 10:11:47 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.525 10:11:47 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:08.525 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.525 INFO: shutting down applications... 00:06:08.525 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:08.525 10:11:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69716 ]] 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69716 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69716 00:06:08.525 10:11:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69716 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:09.095 SPDK target shutdown done 00:06:09.095 Success 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:09.095 10:11:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:09.096 10:11:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:09.096 00:06:09.096 real 0m1.448s 00:06:09.096 user 0m1.057s 00:06:09.096 sys 0m0.310s 00:06:09.096 10:11:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.096 10:11:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:09.096 ************************************ 00:06:09.096 END TEST json_config_extra_key 00:06:09.096 ************************************ 00:06:09.096 10:11:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.096 10:11:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.096 10:11:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.096 10:11:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.096 ************************************ 00:06:09.096 START TEST alias_rpc 00:06:09.096 ************************************ 00:06:09.096 10:11:48 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.355 * Looking for test storage... 
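[editor's note] The json_config_extra_key run that just completed shows the harness's full start/stop cycle: launch spdk_tgt with a JSON config and an RPC socket, wait for the listener, send SIGINT, then poll liveness with kill -0 for at most 30 half-second ticks. A condensed sketch of that pattern, reusing the paths and the 30 x 0.5 s budget from the log; the socket probe below is a crude stand-in for the real waitforlisten helper, which also issues RPCs:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  sock=/var/tmp/spdk_tgt.sock
  "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  pid=$!
  until [ -S "$sock" ]; do sleep 0.1; done   # wait for the RPC listener
  kill -SIGINT "$pid"                        # request a clean shutdown
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break    # kill -0 only probes liveness
      sleep 0.5
  done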
00:06:09.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.355 10:11:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.355 --rc genhtml_branch_coverage=1 00:06:09.355 --rc genhtml_function_coverage=1 00:06:09.355 --rc genhtml_legend=1 00:06:09.355 --rc geninfo_all_blocks=1 00:06:09.355 --rc geninfo_unexecuted_blocks=1 00:06:09.355 00:06:09.355 ' 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.355 --rc genhtml_branch_coverage=1 00:06:09.355 --rc genhtml_function_coverage=1 00:06:09.355 --rc genhtml_legend=1 00:06:09.355 --rc geninfo_all_blocks=1 00:06:09.355 --rc geninfo_unexecuted_blocks=1 00:06:09.355 00:06:09.355 ' 00:06:09.355 10:11:48 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.355 --rc genhtml_branch_coverage=1 00:06:09.355 --rc genhtml_function_coverage=1 00:06:09.355 --rc genhtml_legend=1 00:06:09.355 --rc geninfo_all_blocks=1 00:06:09.355 --rc geninfo_unexecuted_blocks=1 00:06:09.355 00:06:09.355 ' 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.355 --rc genhtml_branch_coverage=1 00:06:09.355 --rc genhtml_function_coverage=1 00:06:09.355 --rc genhtml_legend=1 00:06:09.355 --rc geninfo_all_blocks=1 00:06:09.355 --rc geninfo_unexecuted_blocks=1 00:06:09.355 00:06:09.355 ' 00:06:09.355 10:11:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.355 10:11:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69789 00:06:09.355 10:11:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69789 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69789 ']' 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.355 10:11:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.355 10:11:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.355 [2024-11-29 10:11:48.745882] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:09.355 [2024-11-29 10:11:48.745998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69789 ] 00:06:09.614 [2024-11-29 10:11:48.887318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.614 [2024-11-29 10:11:48.903385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.181 10:11:49 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.181 10:11:49 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.181 10:11:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:10.439 10:11:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69789 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69789 ']' 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69789 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69789 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.439 killing process with pid 69789 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69789' 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@973 -- # kill 69789 00:06:10.439 10:11:49 alias_rpc -- common/autotest_common.sh@978 -- # wait 69789 00:06:10.699 00:06:10.699 real 0m1.463s 00:06:10.699 user 0m1.586s 00:06:10.699 sys 0m0.331s 00:06:10.699 10:11:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.699 10:11:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.699 ************************************ 00:06:10.699 END TEST alias_rpc 00:06:10.700 ************************************ 00:06:10.700 10:11:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:10.700 10:11:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:10.700 10:11:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.700 10:11:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.700 10:11:50 -- common/autotest_common.sh@10 -- # set +x 00:06:10.700 ************************************ 00:06:10.700 START TEST spdkcli_tcp 00:06:10.700 ************************************ 00:06:10.700 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:10.700 * Looking for test storage... 
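[editor's note] The alias_rpc teardown above xtraces autotest_common.sh's killprocess helper step by step: refuse an empty pid, probe with kill -0, and on Linux read the command name via ps before signalling, so a sudo wrapper is never killed directly. A self-contained sketch of only the steps visible in this log (the real helper has further branches, e.g. the sudo path, that this run never takes):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1      # already gone?
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1          # don't kill the sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }

(wait assumes the pid is a child of the calling shell, as it is in the harness.)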
00:06:10.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:10.700 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.700 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.700 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.960 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.960 10:11:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.961 10:11:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.961 --rc genhtml_branch_coverage=1 00:06:10.961 --rc genhtml_function_coverage=1 00:06:10.961 --rc genhtml_legend=1 00:06:10.961 --rc geninfo_all_blocks=1 00:06:10.961 --rc geninfo_unexecuted_blocks=1 00:06:10.961 00:06:10.961 ' 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.961 --rc genhtml_branch_coverage=1 00:06:10.961 --rc genhtml_function_coverage=1 00:06:10.961 --rc genhtml_legend=1 00:06:10.961 --rc geninfo_all_blocks=1 00:06:10.961 --rc geninfo_unexecuted_blocks=1 00:06:10.961 
00:06:10.961 ' 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.961 --rc genhtml_branch_coverage=1 00:06:10.961 --rc genhtml_function_coverage=1 00:06:10.961 --rc genhtml_legend=1 00:06:10.961 --rc geninfo_all_blocks=1 00:06:10.961 --rc geninfo_unexecuted_blocks=1 00:06:10.961 00:06:10.961 ' 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.961 --rc genhtml_branch_coverage=1 00:06:10.961 --rc genhtml_function_coverage=1 00:06:10.961 --rc genhtml_legend=1 00:06:10.961 --rc geninfo_all_blocks=1 00:06:10.961 --rc geninfo_unexecuted_blocks=1 00:06:10.961 00:06:10.961 ' 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69869 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69869 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69869 ']' 00:06:10.961 10:11:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.961 10:11:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.961 [2024-11-29 10:11:50.280051] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:10.961 [2024-11-29 10:11:50.280167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69869 ] 00:06:10.961 [2024-11-29 10:11:50.419164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.221 [2024-11-29 10:11:50.439211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.221 [2024-11-29 10:11:50.439321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.792 10:11:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.792 10:11:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:11.792 10:11:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69886 00:06:11.792 10:11:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:11.792 10:11:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:12.054 [ 00:06:12.054 "bdev_malloc_delete", 00:06:12.054 "bdev_malloc_create", 00:06:12.054 "bdev_null_resize", 00:06:12.054 "bdev_null_delete", 00:06:12.054 "bdev_null_create", 00:06:12.054 "bdev_nvme_cuse_unregister", 00:06:12.054 "bdev_nvme_cuse_register", 00:06:12.054 "bdev_opal_new_user", 00:06:12.054 "bdev_opal_set_lock_state", 00:06:12.054 "bdev_opal_delete", 00:06:12.054 "bdev_opal_get_info", 00:06:12.054 "bdev_opal_create", 00:06:12.054 "bdev_nvme_opal_revert", 00:06:12.054 "bdev_nvme_opal_init", 00:06:12.054 "bdev_nvme_send_cmd", 00:06:12.054 "bdev_nvme_set_keys", 00:06:12.054 "bdev_nvme_get_path_iostat", 00:06:12.054 "bdev_nvme_get_mdns_discovery_info", 00:06:12.054 "bdev_nvme_stop_mdns_discovery", 00:06:12.054 "bdev_nvme_start_mdns_discovery", 00:06:12.054 "bdev_nvme_set_multipath_policy", 00:06:12.054 "bdev_nvme_set_preferred_path", 00:06:12.054 "bdev_nvme_get_io_paths", 00:06:12.054 "bdev_nvme_remove_error_injection", 00:06:12.054 "bdev_nvme_add_error_injection", 00:06:12.054 "bdev_nvme_get_discovery_info", 00:06:12.054 "bdev_nvme_stop_discovery", 00:06:12.054 "bdev_nvme_start_discovery", 00:06:12.054 "bdev_nvme_get_controller_health_info", 00:06:12.054 "bdev_nvme_disable_controller", 00:06:12.054 "bdev_nvme_enable_controller", 00:06:12.054 "bdev_nvme_reset_controller", 00:06:12.054 "bdev_nvme_get_transport_statistics", 00:06:12.054 "bdev_nvme_apply_firmware", 00:06:12.054 "bdev_nvme_detach_controller", 00:06:12.054 "bdev_nvme_get_controllers", 00:06:12.054 "bdev_nvme_attach_controller", 00:06:12.054 "bdev_nvme_set_hotplug", 00:06:12.054 "bdev_nvme_set_options", 00:06:12.054 "bdev_passthru_delete", 00:06:12.054 "bdev_passthru_create", 00:06:12.054 "bdev_lvol_set_parent_bdev", 00:06:12.054 "bdev_lvol_set_parent", 00:06:12.054 "bdev_lvol_check_shallow_copy", 00:06:12.054 "bdev_lvol_start_shallow_copy", 00:06:12.054 "bdev_lvol_grow_lvstore", 00:06:12.054 "bdev_lvol_get_lvols", 00:06:12.054 "bdev_lvol_get_lvstores", 00:06:12.054 "bdev_lvol_delete", 00:06:12.054 "bdev_lvol_set_read_only", 00:06:12.054 "bdev_lvol_resize", 00:06:12.054 "bdev_lvol_decouple_parent", 00:06:12.054 "bdev_lvol_inflate", 00:06:12.054 "bdev_lvol_rename", 00:06:12.054 "bdev_lvol_clone_bdev", 00:06:12.054 "bdev_lvol_clone", 00:06:12.054 "bdev_lvol_snapshot", 00:06:12.054 "bdev_lvol_create", 00:06:12.054 "bdev_lvol_delete_lvstore", 00:06:12.054 "bdev_lvol_rename_lvstore", 00:06:12.054 
"bdev_lvol_create_lvstore", 00:06:12.054 "bdev_raid_set_options", 00:06:12.054 "bdev_raid_remove_base_bdev", 00:06:12.054 "bdev_raid_add_base_bdev", 00:06:12.054 "bdev_raid_delete", 00:06:12.054 "bdev_raid_create", 00:06:12.054 "bdev_raid_get_bdevs", 00:06:12.054 "bdev_error_inject_error", 00:06:12.054 "bdev_error_delete", 00:06:12.054 "bdev_error_create", 00:06:12.054 "bdev_split_delete", 00:06:12.054 "bdev_split_create", 00:06:12.054 "bdev_delay_delete", 00:06:12.054 "bdev_delay_create", 00:06:12.054 "bdev_delay_update_latency", 00:06:12.054 "bdev_zone_block_delete", 00:06:12.054 "bdev_zone_block_create", 00:06:12.054 "blobfs_create", 00:06:12.054 "blobfs_detect", 00:06:12.054 "blobfs_set_cache_size", 00:06:12.054 "bdev_xnvme_delete", 00:06:12.054 "bdev_xnvme_create", 00:06:12.054 "bdev_aio_delete", 00:06:12.054 "bdev_aio_rescan", 00:06:12.054 "bdev_aio_create", 00:06:12.054 "bdev_ftl_set_property", 00:06:12.054 "bdev_ftl_get_properties", 00:06:12.054 "bdev_ftl_get_stats", 00:06:12.054 "bdev_ftl_unmap", 00:06:12.054 "bdev_ftl_unload", 00:06:12.054 "bdev_ftl_delete", 00:06:12.054 "bdev_ftl_load", 00:06:12.054 "bdev_ftl_create", 00:06:12.054 "bdev_virtio_attach_controller", 00:06:12.054 "bdev_virtio_scsi_get_devices", 00:06:12.054 "bdev_virtio_detach_controller", 00:06:12.054 "bdev_virtio_blk_set_hotplug", 00:06:12.054 "bdev_iscsi_delete", 00:06:12.054 "bdev_iscsi_create", 00:06:12.054 "bdev_iscsi_set_options", 00:06:12.054 "accel_error_inject_error", 00:06:12.054 "ioat_scan_accel_module", 00:06:12.054 "dsa_scan_accel_module", 00:06:12.054 "iaa_scan_accel_module", 00:06:12.054 "keyring_file_remove_key", 00:06:12.054 "keyring_file_add_key", 00:06:12.054 "keyring_linux_set_options", 00:06:12.054 "fsdev_aio_delete", 00:06:12.054 "fsdev_aio_create", 00:06:12.054 "iscsi_get_histogram", 00:06:12.054 "iscsi_enable_histogram", 00:06:12.054 "iscsi_set_options", 00:06:12.054 "iscsi_get_auth_groups", 00:06:12.054 "iscsi_auth_group_remove_secret", 00:06:12.054 "iscsi_auth_group_add_secret", 00:06:12.054 "iscsi_delete_auth_group", 00:06:12.054 "iscsi_create_auth_group", 00:06:12.054 "iscsi_set_discovery_auth", 00:06:12.054 "iscsi_get_options", 00:06:12.054 "iscsi_target_node_request_logout", 00:06:12.054 "iscsi_target_node_set_redirect", 00:06:12.054 "iscsi_target_node_set_auth", 00:06:12.054 "iscsi_target_node_add_lun", 00:06:12.054 "iscsi_get_stats", 00:06:12.054 "iscsi_get_connections", 00:06:12.054 "iscsi_portal_group_set_auth", 00:06:12.055 "iscsi_start_portal_group", 00:06:12.055 "iscsi_delete_portal_group", 00:06:12.055 "iscsi_create_portal_group", 00:06:12.055 "iscsi_get_portal_groups", 00:06:12.055 "iscsi_delete_target_node", 00:06:12.055 "iscsi_target_node_remove_pg_ig_maps", 00:06:12.055 "iscsi_target_node_add_pg_ig_maps", 00:06:12.055 "iscsi_create_target_node", 00:06:12.055 "iscsi_get_target_nodes", 00:06:12.055 "iscsi_delete_initiator_group", 00:06:12.055 "iscsi_initiator_group_remove_initiators", 00:06:12.055 "iscsi_initiator_group_add_initiators", 00:06:12.055 "iscsi_create_initiator_group", 00:06:12.055 "iscsi_get_initiator_groups", 00:06:12.055 "nvmf_set_crdt", 00:06:12.055 "nvmf_set_config", 00:06:12.055 "nvmf_set_max_subsystems", 00:06:12.055 "nvmf_stop_mdns_prr", 00:06:12.055 "nvmf_publish_mdns_prr", 00:06:12.055 "nvmf_subsystem_get_listeners", 00:06:12.055 "nvmf_subsystem_get_qpairs", 00:06:12.055 "nvmf_subsystem_get_controllers", 00:06:12.055 "nvmf_get_stats", 00:06:12.055 "nvmf_get_transports", 00:06:12.055 "nvmf_create_transport", 00:06:12.055 "nvmf_get_targets", 00:06:12.055 
"nvmf_delete_target", 00:06:12.055 "nvmf_create_target", 00:06:12.055 "nvmf_subsystem_allow_any_host", 00:06:12.055 "nvmf_subsystem_set_keys", 00:06:12.055 "nvmf_subsystem_remove_host", 00:06:12.055 "nvmf_subsystem_add_host", 00:06:12.055 "nvmf_ns_remove_host", 00:06:12.055 "nvmf_ns_add_host", 00:06:12.055 "nvmf_subsystem_remove_ns", 00:06:12.055 "nvmf_subsystem_set_ns_ana_group", 00:06:12.055 "nvmf_subsystem_add_ns", 00:06:12.055 "nvmf_subsystem_listener_set_ana_state", 00:06:12.055 "nvmf_discovery_get_referrals", 00:06:12.055 "nvmf_discovery_remove_referral", 00:06:12.055 "nvmf_discovery_add_referral", 00:06:12.055 "nvmf_subsystem_remove_listener", 00:06:12.055 "nvmf_subsystem_add_listener", 00:06:12.055 "nvmf_delete_subsystem", 00:06:12.055 "nvmf_create_subsystem", 00:06:12.055 "nvmf_get_subsystems", 00:06:12.055 "env_dpdk_get_mem_stats", 00:06:12.055 "nbd_get_disks", 00:06:12.055 "nbd_stop_disk", 00:06:12.055 "nbd_start_disk", 00:06:12.055 "ublk_recover_disk", 00:06:12.055 "ublk_get_disks", 00:06:12.055 "ublk_stop_disk", 00:06:12.055 "ublk_start_disk", 00:06:12.055 "ublk_destroy_target", 00:06:12.055 "ublk_create_target", 00:06:12.055 "virtio_blk_create_transport", 00:06:12.055 "virtio_blk_get_transports", 00:06:12.055 "vhost_controller_set_coalescing", 00:06:12.055 "vhost_get_controllers", 00:06:12.055 "vhost_delete_controller", 00:06:12.055 "vhost_create_blk_controller", 00:06:12.055 "vhost_scsi_controller_remove_target", 00:06:12.055 "vhost_scsi_controller_add_target", 00:06:12.055 "vhost_start_scsi_controller", 00:06:12.055 "vhost_create_scsi_controller", 00:06:12.055 "thread_set_cpumask", 00:06:12.055 "scheduler_set_options", 00:06:12.055 "framework_get_governor", 00:06:12.055 "framework_get_scheduler", 00:06:12.055 "framework_set_scheduler", 00:06:12.055 "framework_get_reactors", 00:06:12.055 "thread_get_io_channels", 00:06:12.055 "thread_get_pollers", 00:06:12.055 "thread_get_stats", 00:06:12.055 "framework_monitor_context_switch", 00:06:12.055 "spdk_kill_instance", 00:06:12.055 "log_enable_timestamps", 00:06:12.055 "log_get_flags", 00:06:12.055 "log_clear_flag", 00:06:12.055 "log_set_flag", 00:06:12.055 "log_get_level", 00:06:12.055 "log_set_level", 00:06:12.055 "log_get_print_level", 00:06:12.055 "log_set_print_level", 00:06:12.055 "framework_enable_cpumask_locks", 00:06:12.055 "framework_disable_cpumask_locks", 00:06:12.055 "framework_wait_init", 00:06:12.055 "framework_start_init", 00:06:12.055 "scsi_get_devices", 00:06:12.055 "bdev_get_histogram", 00:06:12.055 "bdev_enable_histogram", 00:06:12.055 "bdev_set_qos_limit", 00:06:12.055 "bdev_set_qd_sampling_period", 00:06:12.055 "bdev_get_bdevs", 00:06:12.055 "bdev_reset_iostat", 00:06:12.055 "bdev_get_iostat", 00:06:12.055 "bdev_examine", 00:06:12.055 "bdev_wait_for_examine", 00:06:12.055 "bdev_set_options", 00:06:12.055 "accel_get_stats", 00:06:12.055 "accel_set_options", 00:06:12.055 "accel_set_driver", 00:06:12.055 "accel_crypto_key_destroy", 00:06:12.055 "accel_crypto_keys_get", 00:06:12.055 "accel_crypto_key_create", 00:06:12.055 "accel_assign_opc", 00:06:12.055 "accel_get_module_info", 00:06:12.055 "accel_get_opc_assignments", 00:06:12.055 "vmd_rescan", 00:06:12.055 "vmd_remove_device", 00:06:12.055 "vmd_enable", 00:06:12.055 "sock_get_default_impl", 00:06:12.055 "sock_set_default_impl", 00:06:12.055 "sock_impl_set_options", 00:06:12.055 "sock_impl_get_options", 00:06:12.055 "iobuf_get_stats", 00:06:12.055 "iobuf_set_options", 00:06:12.055 "keyring_get_keys", 00:06:12.055 "framework_get_pci_devices", 00:06:12.055 
"framework_get_config", 00:06:12.055 "framework_get_subsystems", 00:06:12.055 "fsdev_set_opts", 00:06:12.055 "fsdev_get_opts", 00:06:12.055 "trace_get_info", 00:06:12.055 "trace_get_tpoint_group_mask", 00:06:12.055 "trace_disable_tpoint_group", 00:06:12.055 "trace_enable_tpoint_group", 00:06:12.055 "trace_clear_tpoint_mask", 00:06:12.055 "trace_set_tpoint_mask", 00:06:12.055 "notify_get_notifications", 00:06:12.055 "notify_get_types", 00:06:12.055 "spdk_get_version", 00:06:12.055 "rpc_get_methods" 00:06:12.055 ] 00:06:12.055 10:11:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.055 10:11:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:12.055 10:11:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69869 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69869 ']' 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69869 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69869 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.055 killing process with pid 69869 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69869' 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69869 00:06:12.055 10:11:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69869 00:06:12.315 00:06:12.315 real 0m1.587s 00:06:12.315 user 0m2.864s 00:06:12.315 sys 0m0.397s 00:06:12.315 10:11:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.315 ************************************ 00:06:12.315 END TEST spdkcli_tcp 00:06:12.315 ************************************ 00:06:12.315 10:11:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.315 10:11:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.315 10:11:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.315 10:11:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.315 10:11:51 -- common/autotest_common.sh@10 -- # set +x 00:06:12.315 ************************************ 00:06:12.315 START TEST dpdk_mem_utility 00:06:12.315 ************************************ 00:06:12.315 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.315 * Looking for test storage... 
00:06:12.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.576 10:11:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.576 --rc genhtml_branch_coverage=1 00:06:12.576 --rc genhtml_function_coverage=1 00:06:12.576 --rc genhtml_legend=1 00:06:12.576 --rc geninfo_all_blocks=1 00:06:12.576 --rc geninfo_unexecuted_blocks=1 00:06:12.576 00:06:12.576 ' 00:06:12.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.576 --rc genhtml_branch_coverage=1 00:06:12.576 --rc genhtml_function_coverage=1 00:06:12.576 --rc genhtml_legend=1 00:06:12.576 --rc geninfo_all_blocks=1 00:06:12.576 --rc geninfo_unexecuted_blocks=1 00:06:12.576 00:06:12.576 ' 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.576 --rc genhtml_branch_coverage=1 00:06:12.576 --rc genhtml_function_coverage=1 00:06:12.576 --rc genhtml_legend=1 00:06:12.576 --rc geninfo_all_blocks=1 00:06:12.576 --rc geninfo_unexecuted_blocks=1 00:06:12.576 00:06:12.576 ' 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.576 --rc genhtml_branch_coverage=1 00:06:12.576 --rc genhtml_function_coverage=1 00:06:12.576 --rc genhtml_legend=1 00:06:12.576 --rc geninfo_all_blocks=1 00:06:12.576 --rc geninfo_unexecuted_blocks=1 00:06:12.576 00:06:12.576 ' 00:06:12.576 10:11:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:12.576 10:11:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69963 00:06:12.576 10:11:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69963 00:06:12.576 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69963 ']' 00:06:12.577 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.577 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.577 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.577 10:11:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.577 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.577 10:11:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.577 [2024-11-29 10:11:51.932479] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
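[editor's note] Every test in this section opens with the same scripts/common.sh xtrace: take the last field of lcov --version, then run lt 1.15 2, where cmp_versions splits both strings on IFS=.-: into arrays and walks components until one side wins, to pick old- or new-style lcov flags. A self-contained re-derivation of that less-than check, assuming purely numeric components (the real decimal helper also normalizes non-numeric parts, which this log never exercises):

  lt() {  # lt A B -> exit 0 iff version A < version B
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A wins: not less
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # B wins: less
      done
      return 1    # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use pre-2.0 lcov flags"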
00:06:12.577 [2024-11-29 10:11:51.932906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69963 ] 00:06:12.837 [2024-11-29 10:11:52.076674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.837 [2024-11-29 10:11:52.095753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.410 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.410 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:13.410 10:11:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:13.410 10:11:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:13.410 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.410 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.410 { 00:06:13.410 "filename": "/tmp/spdk_mem_dump.txt" 00:06:13.410 } 00:06:13.410 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.410 10:11:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:13.410 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:13.410 1 heaps totaling size 818.000000 MiB 00:06:13.410 size: 818.000000 MiB heap id: 0 00:06:13.410 end heaps---------- 00:06:13.410 9 mempools totaling size 603.782043 MiB 00:06:13.410 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:13.410 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:13.410 size: 100.555481 MiB name: bdev_io_69963 00:06:13.410 size: 50.003479 MiB name: msgpool_69963 00:06:13.410 size: 36.509338 MiB name: fsdev_io_69963 00:06:13.410 size: 21.763794 MiB name: PDU_Pool 00:06:13.410 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:13.410 size: 4.133484 MiB name: evtpool_69963 00:06:13.410 size: 0.026123 MiB name: Session_Pool 00:06:13.410 end mempools------- 00:06:13.410 6 memzones totaling size 4.142822 MiB 00:06:13.410 size: 1.000366 MiB name: RG_ring_0_69963 00:06:13.410 size: 1.000366 MiB name: RG_ring_1_69963 00:06:13.410 size: 1.000366 MiB name: RG_ring_4_69963 00:06:13.410 size: 1.000366 MiB name: RG_ring_5_69963 00:06:13.410 size: 0.125366 MiB name: RG_ring_2_69963 00:06:13.410 size: 0.015991 MiB name: RG_ring_3_69963 00:06:13.410 end memzones------- 00:06:13.410 10:11:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:13.410 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:06:13.410 list of free elements. 
size: 10.803223 MiB 00:06:13.410 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:13.410 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:13.410 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:13.410 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:13.410 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:13.410 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:13.410 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:13.410 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:13.410 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:06:13.410 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:13.410 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:13.410 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:13.410 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:13.410 element at address: 0x200028200000 with size: 0.396484 MiB 00:06:13.410 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:13.410 list of standard malloc elements. size: 199.267883 MiB 00:06:13.410 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:13.410 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:13.410 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:13.410 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:13.410 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:13.410 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:13.410 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:13.410 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:13.410 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:13.410 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:13.410 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:13.410 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:13.411 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:13.411 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:06:13.412 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:13.412 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:13.412 element at address: 0x200028265800 with size: 0.000183 MiB 00:06:13.412 element at address: 0x2000282658c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826c4c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:13.412 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:13.412 list of memzone associated elements. 
size: 607.928894 MiB 00:06:13.412 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:13.412 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:13.412 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:13.412 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:13.412 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:13.413 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_69963_0 00:06:13.413 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:13.413 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69963_0 00:06:13.413 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:13.413 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69963_0 00:06:13.413 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:13.413 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:13.413 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:13.413 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:13.413 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:13.413 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69963_0 00:06:13.413 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:13.413 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69963 00:06:13.413 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:13.413 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69963 00:06:13.413 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:13.413 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:13.413 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:13.413 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:13.413 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:13.413 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:13.413 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:13.413 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:13.413 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:13.413 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69963 00:06:13.413 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:13.413 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69963 00:06:13.413 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:13.413 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69963 00:06:13.413 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:13.413 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69963 00:06:13.413 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:13.413 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69963 00:06:13.413 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:13.413 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69963 00:06:13.413 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:13.413 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:13.413 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:13.413 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:13.413 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:13.413 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:13.413 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:13.413 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69963 00:06:13.413 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:13.413 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69963 00:06:13.413 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:13.413 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:13.413 element at address: 0x200028265980 with size: 0.023743 MiB 00:06:13.413 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:13.413 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:13.413 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69963 00:06:13.413 element at address: 0x20002826bac0 with size: 0.002441 MiB 00:06:13.413 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:13.413 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:13.413 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69963 00:06:13.413 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:13.413 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69963 00:06:13.413 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:13.413 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69963 00:06:13.413 element at address: 0x20002826c580 with size: 0.000305 MiB 00:06:13.413 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:13.673 10:11:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:13.673 10:11:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69963 00:06:13.673 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69963 ']' 00:06:13.673 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69963 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69963 00:06:13.674 killing process with pid 69963 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69963' 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69963 00:06:13.674 10:11:52 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69963 00:06:13.933 00:06:13.933 real 0m1.441s 00:06:13.933 user 0m1.504s 00:06:13.933 sys 0m0.354s 00:06:13.933 10:11:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.933 ************************************ 00:06:13.933 END TEST dpdk_mem_utility 00:06:13.933 10:11:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.934 ************************************ 00:06:13.934 10:11:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:13.934 10:11:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.934 10:11:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.934 10:11:53 -- common/autotest_common.sh@10 -- # set +x 
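Annotation: the dpdk_mem_utility pass that just completed reduces to three steps visible in the trace: start the target, ask it to dump DPDK memory stats over RPC, then summarize the dump with MEM_SCRIPT. A hedged sketch of driving the same flow by hand (paths match the ones traced above; env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt by default):

  # 1) Start the target and wait for its RPC socket (see sketch earlier).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  pid=$!
  # 2) Have DPDK write heap/mempool/memzone stats to /tmp/spdk_mem_dump.txt.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # 3) Summarize the dump; -m 0 adds the per-element detail for heap id 0,
  #    which is what produced the long element list above.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
  kill "$pid"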
00:06:13.934 ************************************ 00:06:13.934 START TEST event 00:06:13.934 ************************************ 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:13.934 * Looking for test storage... 00:06:13.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.934 10:11:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.934 10:11:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.934 10:11:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.934 10:11:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.934 10:11:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.934 10:11:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.934 10:11:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.934 10:11:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.934 10:11:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.934 10:11:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.934 10:11:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.934 10:11:53 event -- scripts/common.sh@344 -- # case "$op" in 00:06:13.934 10:11:53 event -- scripts/common.sh@345 -- # : 1 00:06:13.934 10:11:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.934 10:11:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.934 10:11:53 event -- scripts/common.sh@365 -- # decimal 1 00:06:13.934 10:11:53 event -- scripts/common.sh@353 -- # local d=1 00:06:13.934 10:11:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.934 10:11:53 event -- scripts/common.sh@355 -- # echo 1 00:06:13.934 10:11:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.934 10:11:53 event -- scripts/common.sh@366 -- # decimal 2 00:06:13.934 10:11:53 event -- scripts/common.sh@353 -- # local d=2 00:06:13.934 10:11:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.934 10:11:53 event -- scripts/common.sh@355 -- # echo 2 00:06:13.934 10:11:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.934 10:11:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.934 10:11:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.934 10:11:53 event -- scripts/common.sh@368 -- # return 0 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.934 --rc genhtml_branch_coverage=1 00:06:13.934 --rc genhtml_function_coverage=1 00:06:13.934 --rc genhtml_legend=1 00:06:13.934 --rc geninfo_all_blocks=1 00:06:13.934 --rc geninfo_unexecuted_blocks=1 00:06:13.934 00:06:13.934 ' 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.934 --rc genhtml_branch_coverage=1 00:06:13.934 --rc genhtml_function_coverage=1 00:06:13.934 --rc genhtml_legend=1 00:06:13.934 --rc 
geninfo_all_blocks=1 00:06:13.934 --rc geninfo_unexecuted_blocks=1 00:06:13.934 00:06:13.934 ' 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.934 --rc genhtml_branch_coverage=1 00:06:13.934 --rc genhtml_function_coverage=1 00:06:13.934 --rc genhtml_legend=1 00:06:13.934 --rc geninfo_all_blocks=1 00:06:13.934 --rc geninfo_unexecuted_blocks=1 00:06:13.934 00:06:13.934 ' 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.934 --rc genhtml_branch_coverage=1 00:06:13.934 --rc genhtml_function_coverage=1 00:06:13.934 --rc genhtml_legend=1 00:06:13.934 --rc geninfo_all_blocks=1 00:06:13.934 --rc geninfo_unexecuted_blocks=1 00:06:13.934 00:06:13.934 ' 00:06:13.934 10:11:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:13.934 10:11:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:13.934 10:11:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:13.934 10:11:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.934 10:11:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.934 ************************************ 00:06:13.934 START TEST event_perf 00:06:13.934 ************************************ 00:06:13.934 10:11:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.194 Running I/O for 1 seconds...[2024-11-29 10:11:53.397016] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:14.194 [2024-11-29 10:11:53.397133] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70044 ] 00:06:14.194 [2024-11-29 10:11:53.539503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.194 [2024-11-29 10:11:53.561265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.194 [2024-11-29 10:11:53.561587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.194 [2024-11-29 10:11:53.562029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.194 [2024-11-29 10:11:53.562115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.130 Running I/O for 1 seconds... 00:06:15.130 lcore 0: 204242 00:06:15.130 lcore 1: 204241 00:06:15.130 lcore 2: 204244 00:06:15.130 lcore 3: 204242 00:06:15.391 done. 
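Annotation: the scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is how each test decides between legacy and modern lcov option sets. A condensed, illustrative reimplementation of that field-by-field comparison, not the shipped helper:

  # Compare dotted versions field by field; succeeds when $1 < $2.
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # versions are equal
  }
  version_lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"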
00:06:15.391 00:06:15.391 real 0m1.236s 00:06:15.391 user 0m4.057s 00:06:15.391 sys 0m0.062s 00:06:15.391 10:11:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.391 10:11:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.391 ************************************ 00:06:15.391 END TEST event_perf 00:06:15.391 ************************************ 00:06:15.391 10:11:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:15.391 10:11:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:15.391 10:11:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.391 10:11:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.391 ************************************ 00:06:15.391 START TEST event_reactor 00:06:15.391 ************************************ 00:06:15.391 10:11:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:15.391 [2024-11-29 10:11:54.682988] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:15.391 [2024-11-29 10:11:54.683091] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70083 ] 00:06:15.391 [2024-11-29 10:11:54.828077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.391 [2024-11-29 10:11:54.848357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.772 test_start 00:06:16.772 oneshot 00:06:16.772 tick 100 00:06:16.772 tick 100 00:06:16.772 tick 250 00:06:16.772 tick 100 00:06:16.772 tick 100 00:06:16.772 tick 250 00:06:16.772 tick 100 00:06:16.772 tick 500 00:06:16.772 tick 100 00:06:16.772 tick 100 00:06:16.772 tick 250 00:06:16.772 tick 100 00:06:16.772 tick 100 00:06:16.772 test_end 00:06:16.772 00:06:16.772 real 0m1.232s 00:06:16.772 user 0m1.079s 00:06:16.772 sys 0m0.044s 00:06:16.772 10:11:55 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.772 10:11:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:16.772 ************************************ 00:06:16.772 END TEST event_reactor 00:06:16.772 ************************************ 00:06:16.772 10:11:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.772 10:11:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:16.772 10:11:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.772 10:11:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.772 ************************************ 00:06:16.772 START TEST event_reactor_perf 00:06:16.772 ************************************ 00:06:16.772 10:11:55 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.772 [2024-11-29 10:11:55.974664] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:16.772 [2024-11-29 10:11:55.974770] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70114 ] 00:06:16.772 [2024-11-29 10:11:56.118920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.772 [2024-11-29 10:11:56.143513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.156 test_start 00:06:18.156 test_end 00:06:18.156 Performance: 312956 events per second 00:06:18.156 00:06:18.156 real 0m1.238s 00:06:18.156 user 0m1.076s 00:06:18.156 sys 0m0.054s 00:06:18.156 10:11:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.156 ************************************ 00:06:18.156 END TEST event_reactor_perf 00:06:18.156 ************************************ 00:06:18.156 10:11:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.156 10:11:57 event -- event/event.sh@49 -- # uname -s 00:06:18.156 10:11:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:18.156 10:11:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:18.156 10:11:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.156 10:11:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.156 10:11:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.156 ************************************ 00:06:18.156 START TEST event_scheduler 00:06:18.156 ************************************ 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:18.156 * Looking for test storage... 
00:06:18.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:18.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.156 10:11:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.156 --rc genhtml_branch_coverage=1 00:06:18.156 --rc genhtml_function_coverage=1 00:06:18.156 --rc genhtml_legend=1 00:06:18.156 --rc geninfo_all_blocks=1 00:06:18.156 --rc geninfo_unexecuted_blocks=1 00:06:18.156 00:06:18.156 ' 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.156 --rc genhtml_branch_coverage=1 00:06:18.156 --rc genhtml_function_coverage=1 00:06:18.156 --rc genhtml_legend=1 00:06:18.156 --rc geninfo_all_blocks=1 00:06:18.156 --rc geninfo_unexecuted_blocks=1 00:06:18.156 00:06:18.156 ' 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.156 --rc genhtml_branch_coverage=1 00:06:18.156 --rc genhtml_function_coverage=1 00:06:18.156 --rc genhtml_legend=1 00:06:18.156 --rc geninfo_all_blocks=1 00:06:18.156 --rc geninfo_unexecuted_blocks=1 00:06:18.156 00:06:18.156 ' 00:06:18.156 10:11:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.156 --rc genhtml_branch_coverage=1 00:06:18.156 --rc genhtml_function_coverage=1 00:06:18.156 --rc genhtml_legend=1 00:06:18.156 --rc geninfo_all_blocks=1 00:06:18.156 --rc geninfo_unexecuted_blocks=1 00:06:18.156 00:06:18.156 ' 00:06:18.156 10:11:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:18.156 10:11:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70185 00:06:18.156 10:11:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.156 10:11:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:18.157 10:11:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70185 00:06:18.157 10:11:57 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70185 ']' 00:06:18.157 10:11:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.157 10:11:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.157 10:11:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.157 10:11:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.157 10:11:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.157 [2024-11-29 10:11:57.458496] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:18.157 [2024-11-29 10:11:57.459048] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70185 ] 00:06:18.157 [2024-11-29 10:11:57.601885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.418 [2024-11-29 10:11:57.624217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.418 [2024-11-29 10:11:57.624486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.418 [2024-11-29 10:11:57.624871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.418 [2024-11-29 10:11:57.625046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:18.989 10:11:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.989 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.989 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.989 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.989 POWER: Cannot set governor of lcore 0 to performance 00:06:18.989 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.989 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.989 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:18.989 POWER: Unable to set Power Management Environment for lcore 0 00:06:18.989 [2024-11-29 10:11:58.267129] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:18.989 [2024-11-29 10:11:58.267152] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:18.989 [2024-11-29 10:11:58.267173] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:18.989 [2024-11-29 10:11:58.267201] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:18.989 [2024-11-29 10:11:58.267209] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:18.989 [2024-11-29 10:11:58.267218] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.989 10:11:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.989 [2024-11-29 10:11:58.323524] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
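Annotation: the scheduler app was launched with --wait-for-rpc, so once its four reactors are up (trace above) the test switches it to the dynamic scheduler over RPC before letting init finish. A sketch of that sequence using SPDK's stock rpc.py rather than the test's rpc_cmd wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Select the dynamic scheduler, then let the framework finish init.
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  # Inspect what ended up active; on hosts without writable cpufreq sysfs
  # the dpdk governor fails (the POWER errors below) and dynamic proceeds
  # without it.
  $rpc framework_get_scheduler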
00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.989 10:11:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.989 10:11:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.989 ************************************ 00:06:18.989 START TEST scheduler_create_thread 00:06:18.989 ************************************ 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.989 2 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.989 3 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.989 4 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.989 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 5 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 6 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 7 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 8 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 9 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 10 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.990 10:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.907 10:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.907 10:11:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.907 10:11:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.907 10:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.907 10:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.506 10:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.506 00:06:21.506 real 0m2.608s 00:06:21.506 user 0m0.016s 00:06:21.506 sys 0m0.005s 00:06:21.506 10:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.506 ************************************ 00:06:21.506 END TEST scheduler_create_thread 00:06:21.506 ************************************ 00:06:21.506 10:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 10:12:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:21.764 10:12:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70185 00:06:21.764 10:12:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70185 ']' 00:06:21.764 10:12:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70185 00:06:21.764 10:12:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:21.764 10:12:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.764 10:12:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70185 00:06:21.764 killing process with pid 70185 00:06:21.764 10:12:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:21.764 10:12:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:21.764 10:12:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70185' 00:06:21.764 10:12:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70185 00:06:21.764 10:12:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70185 00:06:22.021 [2024-11-29 10:12:01.426825] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
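The scheduler_create_thread trace above exercises SPDK's scheduler test RPCs end to end: four 100%-active threads pinned to cores 0x1 through 0x8, four matching 0%-active idle threads on the same masks, an unpinned one-third-active thread, a half-active thread whose load is raised to 50 via scheduler_thread_set_active, and finally a throwaway thread that is created only to be deleted. A minimal sketch of the same sequence driven directly through rpc.py, assuming a running SPDK app on the default RPC socket (the test itself goes through its rpc_cmd wrapper; the plugin name, flags, and values below are taken from the trace, everything else is illustrative):

    #!/usr/bin/env bash
    # Sketch: replay the scheduler test's RPC sequence by hand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # One busy (100% active) thread pinned to each of the first four cores.
    for mask in 0x1 0x2 0x4 0x8; do
        "$rpc" --plugin scheduler_plugin scheduler_thread_create \
            -n active_pinned -m "$mask" -a 100
    done

    # Matching idle (0% active) threads on the same core masks.
    for mask in 0x1 0x2 0x4 0x8; do
        "$rpc" --plugin scheduler_plugin scheduler_thread_create \
            -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads; the RPC prints the new thread id on stdout,
    # which is how the trace ends up with thread_id=11 and thread_id=12.
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

    # Create a thread just to delete it again.
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"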
00:06:22.281 00:06:22.281 real 0m4.322s 00:06:22.281 user 0m7.930s 00:06:22.281 sys 0m0.281s 00:06:22.281 10:12:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.281 ************************************ 00:06:22.281 END TEST event_scheduler 00:06:22.281 ************************************ 00:06:22.281 10:12:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.281 10:12:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.281 10:12:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.281 10:12:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.281 10:12:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.281 10:12:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.281 ************************************ 00:06:22.281 START TEST app_repeat 00:06:22.281 ************************************ 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.281 Process app_repeat pid: 70280 00:06:22.281 spdk_app_start Round 0 00:06:22.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70280 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70280' 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70280 /var/tmp/spdk-nbd.sock 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70280 ']' 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.281 10:12:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.281 10:12:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.281 [2024-11-29 10:12:01.673364] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
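The app_repeat harness visible here first dry-runs modprobe to confirm the nbd module exists, loads it, launches the app_repeat binary with its RPC server on a dedicated UNIX socket, and blocks on waitforlisten (max_retries=100 in the trace) until that socket answers before driving any test traffic. A rough sketch of that launch-and-wait pattern, assuming rpc_get_methods as the readiness probe and a 0.1 s poll interval (the real waitforlisten helper does more bookkeeping than shown):

    # Sketch: start app_repeat and wait for its RPC socket to come up.
    sock=/var/tmp/spdk-nbd.sock
    app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat

    modprobe -n nbd || exit 1    # dry run: verify the module is available
    modprobe nbd                 # then actually load it

    "$app" -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do
        # An RPC round-trip only succeeds once the app is listening.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1    # assumed retry interval
    done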
00:06:22.281 [2024-11-29 10:12:01.673490] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70280 ] 00:06:22.540 [2024-11-29 10:12:01.817352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.540 [2024-11-29 10:12:01.838682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.540 [2024-11-29 10:12:01.838738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.109 10:12:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.109 10:12:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.109 10:12:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.370 Malloc0 00:06:23.370 10:12:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.631 Malloc1 00:06:23.631 10:12:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.631 10:12:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.892 /dev/nbd0 00:06:23.892 10:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.893 10:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.893 10:12:03 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.893 1+0 records in 00:06:23.893 1+0 records out 00:06:23.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437362 s, 9.4 MB/s 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.893 10:12:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.893 10:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.893 10:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.893 10:12:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.152 /dev/nbd1 00:06:24.152 10:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.152 10:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.152 1+0 records in 00:06:24.152 1+0 records out 00:06:24.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463985 s, 8.8 MB/s 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.152 10:12:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.152 10:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.153 10:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.153 10:12:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.153 10:12:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.153 
10:12:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd0", 00:06:24.412 "bdev_name": "Malloc0" 00:06:24.412 }, 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd1", 00:06:24.412 "bdev_name": "Malloc1" 00:06:24.412 } 00:06:24.412 ]' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd0", 00:06:24.412 "bdev_name": "Malloc0" 00:06:24.412 }, 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd1", 00:06:24.412 "bdev_name": "Malloc1" 00:06:24.412 } 00:06:24.412 ]' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.412 /dev/nbd1' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.412 /dev/nbd1' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.412 256+0 records in 00:06:24.412 256+0 records out 00:06:24.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569448 s, 184 MB/s 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.412 256+0 records in 00:06:24.412 256+0 records out 00:06:24.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173829 s, 60.3 MB/s 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.412 256+0 records in 00:06:24.412 256+0 records out 00:06:24.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172919 s, 60.6 MB/s 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.412 10:12:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.412 10:12:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.669 10:12:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.927 10:12:04 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.927 10:12:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.185 10:12:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.185 10:12:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.443 10:12:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.443 [2024-11-29 10:12:04.841413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.443 [2024-11-29 10:12:04.858895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.443 [2024-11-29 10:12:04.858897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.443 [2024-11-29 10:12:04.890825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.443 [2024-11-29 10:12:04.890874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.721 spdk_app_start Round 1 00:06:28.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.721 10:12:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.721 10:12:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.721 10:12:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70280 /var/tmp/spdk-nbd.sock 00:06:28.721 10:12:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70280 ']' 00:06:28.721 10:12:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.721 10:12:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.721 10:12:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
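Round 0 above shows the waitfornbd gate that follows every nbd_start_disk: the helper polls /proc/partitions until the new device appears, then proves the device actually serves I/O with a single 4 KiB direct read. Reconstructed from the trace (the two 20-iteration loops, the grep, and the O_DIRECT probe are all visible above; the sleep interval and the shortened probe-file path are assumptions):

    # Sketch: wait until /dev/$1 exists and serves reads.
    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls
        done
        for ((i = 1; i <= 20; i++)); do
            # A 4 KiB O_DIRECT read bypasses the page cache, so it only
            # succeeds once the kernel path to the SPDK backend is live.
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }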
00:06:28.722 10:12:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.722 10:12:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.722 10:12:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.722 10:12:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.722 10:12:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.722 Malloc0 00:06:28.722 10:12:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.980 Malloc1 00:06:28.980 10:12:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.980 10:12:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.252 /dev/nbd0 00:06:29.252 10:12:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.252 10:12:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.252 1+0 records in 00:06:29.252 1+0 records out 
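Each round rebuilds its data path from nothing: two malloc bdevs of 64 MiB with a 4096-byte block size, each exported to the host as a kernel nbd device. The RPC pair as a sketch (names and positional arguments match the trace; bdev_malloc_create takes the total size in MiB, then the block size in bytes, and prints the new bdev name):

    # Sketch: back /dev/nbd0 and /dev/nbd1 with fresh malloc bdevs.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096        # prints e.g. Malloc0
    $rpc bdev_malloc_create 64 4096        # prints e.g. Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1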
00:06:29.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233617 s, 17.5 MB/s 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.252 10:12:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.252 10:12:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.252 10:12:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.252 10:12:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.538 /dev/nbd1 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.538 1+0 records in 00:06:29.538 1+0 records out 00:06:29.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186628 s, 21.9 MB/s 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.538 10:12:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.538 10:12:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.797 { 00:06:29.797 "nbd_device": "/dev/nbd0", 00:06:29.797 "bdev_name": "Malloc0" 00:06:29.797 }, 00:06:29.797 { 00:06:29.797 "nbd_device": "/dev/nbd1", 00:06:29.797 "bdev_name": "Malloc1" 00:06:29.797 } 
00:06:29.797 ]' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.797 { 00:06:29.797 "nbd_device": "/dev/nbd0", 00:06:29.797 "bdev_name": "Malloc0" 00:06:29.797 }, 00:06:29.797 { 00:06:29.797 "nbd_device": "/dev/nbd1", 00:06:29.797 "bdev_name": "Malloc1" 00:06:29.797 } 00:06:29.797 ]' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.797 /dev/nbd1' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.797 /dev/nbd1' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.797 256+0 records in 00:06:29.797 256+0 records out 00:06:29.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00973609 s, 108 MB/s 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.797 256+0 records in 00:06:29.797 256+0 records out 00:06:29.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139451 s, 75.2 MB/s 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.797 256+0 records in 00:06:29.797 256+0 records out 00:06:29.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153997 s, 68.1 MB/s 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.797 10:12:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.797 10:12:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.056 10:12:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.377 10:12:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.636 10:12:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.636 10:12:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.636 10:12:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.636 [2024-11-29 10:12:10.078869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.636 [2024-11-29 10:12:10.095922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.636 [2024-11-29 10:12:10.095933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.894 [2024-11-29 10:12:10.126392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.894 [2024-11-29 10:12:10.126439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.173 10:12:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.173 spdk_app_start Round 2 00:06:34.173 10:12:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:34.173 10:12:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70280 /var/tmp/spdk-nbd.sock 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70280 ']' 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
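The data-integrity check each round is symmetric: one 1 MiB random reference file (256 blocks of 4096 bytes) is written to both nbd devices with direct I/O, then each device is read back through cmp against the same file, and the reference file is deleted. A condensed sketch of the write/verify pair seen in the rounds above (the reference path is the one from the trace):

    # Sketch: push 1 MiB of random data through each nbd device and verify it.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB reference file

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"    # verify phase: byte-for-byte compare
    done
    rm "$tmp"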
00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.173 10:12:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.173 10:12:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.173 Malloc0 00:06:34.173 10:12:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.173 Malloc1 00:06:34.173 10:12:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.173 10:12:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.432 /dev/nbd0 00:06:34.432 10:12:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.432 10:12:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.432 1+0 records in 00:06:34.432 1+0 records out 
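Teardown, shown at the end of rounds 0 and 1 above and repeated after this round, mirrors setup: nbd_stop_disk is issued over RPC and waitfornbd_exit polls /proc/partitions until the entry disappears. Note the inverted break condition relative to waitfornbd: the loop exits once grep no longer finds the device. A sketch, with the poll interval again assumed:

    # Sketch: stop an exported disk and wait for the kernel device to vanish.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone -> done
            sleep 0.1    # assumed interval
        done
        return 0
    }

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_stop_disk /dev/nbd0
    waitfornbd_exit nbd0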
00:06:34.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201561 s, 20.3 MB/s 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.432 10:12:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.432 10:12:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.432 10:12:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.432 10:12:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.690 /dev/nbd1 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.690 1+0 records in 00:06:34.690 1+0 records out 00:06:34.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178299 s, 23.0 MB/s 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.690 10:12:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.690 10:12:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.949 { 00:06:34.949 "nbd_device": "/dev/nbd0", 00:06:34.949 "bdev_name": "Malloc0" 00:06:34.949 }, 00:06:34.949 { 00:06:34.949 "nbd_device": "/dev/nbd1", 00:06:34.949 "bdev_name": "Malloc1" 00:06:34.949 } 
00:06:34.949 ]' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.949 { 00:06:34.949 "nbd_device": "/dev/nbd0", 00:06:34.949 "bdev_name": "Malloc0" 00:06:34.949 }, 00:06:34.949 { 00:06:34.949 "nbd_device": "/dev/nbd1", 00:06:34.949 "bdev_name": "Malloc1" 00:06:34.949 } 00:06:34.949 ]' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.949 /dev/nbd1' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.949 /dev/nbd1' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.949 256+0 records in 00:06:34.949 256+0 records out 00:06:34.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00737682 s, 142 MB/s 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.949 256+0 records in 00:06:34.949 256+0 records out 00:06:34.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122595 s, 85.5 MB/s 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.949 256+0 records in 00:06:34.949 256+0 records out 00:06:34.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155239 s, 67.5 MB/s 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.949 10:12:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.949 10:12:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.950 10:12:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.208 10:12:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.466 10:12:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.725 10:12:14 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.725 10:12:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.725 10:12:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.725 10:12:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.983 [2024-11-29 10:12:15.247778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.983 [2024-11-29 10:12:15.264051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.983 [2024-11-29 10:12:15.264148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.983 [2024-11-29 10:12:15.293270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.983 [2024-11-29 10:12:15.293317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.265 10:12:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70280 /var/tmp/spdk-nbd.sock 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70280 ']' 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
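The nbd_get_count sequence traced above turns nbd_get_disks JSON into a device count: jq extracts each .nbd_device path and grep -c counts them, with a fallback to true covering the post-teardown case where the list is empty and grep -c exits non-zero (hence the count=0 checks above). A sketch under those assumptions:

    # Sketch: count currently exported nbd devices via the RPC's JSON output.
    nbd_get_count() {
        local rpc_server=$1 disks_json names count
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s "$rpc_server" nbd_get_disks)
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c still prints 0 when nothing matches but exits 1,
        # so the || true guard keeps set -e scripts alive.
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }

    nbd_get_count /var/tmp/spdk-nbd.sock   # expect 2 while exported, 0 after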
00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:39.265 10:12:18 event.app_repeat -- event/event.sh@39 -- # killprocess 70280 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70280 ']' 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70280 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70280 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70280' 00:06:39.265 killing process with pid 70280 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70280 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70280 00:06:39.265 spdk_app_start is called in Round 0. 00:06:39.265 Shutdown signal received, stop current app iteration 00:06:39.265 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:06:39.265 spdk_app_start is called in Round 1. 00:06:39.265 Shutdown signal received, stop current app iteration 00:06:39.265 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:06:39.265 spdk_app_start is called in Round 2. 00:06:39.265 Shutdown signal received, stop current app iteration 00:06:39.265 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:06:39.265 spdk_app_start is called in Round 3. 00:06:39.265 Shutdown signal received, stop current app iteration 00:06:39.265 10:12:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:39.265 10:12:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:39.265 00:06:39.265 real 0m16.877s 00:06:39.265 user 0m37.789s 00:06:39.265 sys 0m2.066s 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.265 ************************************ 00:06:39.265 END TEST app_repeat 00:06:39.265 ************************************ 00:06:39.265 10:12:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 10:12:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:39.265 10:12:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.265 10:12:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.265 10:12:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.265 10:12:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 ************************************ 00:06:39.265 START TEST cpu_locks 00:06:39.265 ************************************ 00:06:39.265 10:12:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.265 * Looking for test storage... 
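killprocess, used twice in this section (pids 70185 and 70280), is the harness's guarded teardown: it refuses empty pids, probes the process with kill -0, resolves the command name via ps on Linux so it never signals a sudo wrapper directly, then kills and reaps. Reconstructed from the traces above, with the sudo branch (not taken here) elided:

    # Sketch: the guarded kill-and-reap pattern used by the harness.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1      # refuse an empty pid
        kill -0 "$pid" || return 1     # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases process_name = sudo; skipped here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child and propagate its exit status
    }

    killprocess 70280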
00:06:39.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:39.265 10:12:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.265 10:12:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.265 10:12:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.265 10:12:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:39.265 10:12:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.266 10:12:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:39.266 10:12:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.266 10:12:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.266 10:12:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.266 10:12:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.266 --rc genhtml_branch_coverage=1 00:06:39.266 --rc genhtml_function_coverage=1 00:06:39.266 --rc genhtml_legend=1 00:06:39.266 --rc geninfo_all_blocks=1 00:06:39.266 --rc geninfo_unexecuted_blocks=1 00:06:39.266 00:06:39.266 ' 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.266 --rc genhtml_branch_coverage=1 00:06:39.266 --rc genhtml_function_coverage=1 
00:06:39.266 --rc genhtml_legend=1 00:06:39.266 --rc geninfo_all_blocks=1 00:06:39.266 --rc geninfo_unexecuted_blocks=1 00:06:39.266 00:06:39.266 ' 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.266 --rc genhtml_branch_coverage=1 00:06:39.266 --rc genhtml_function_coverage=1 00:06:39.266 --rc genhtml_legend=1 00:06:39.266 --rc geninfo_all_blocks=1 00:06:39.266 --rc geninfo_unexecuted_blocks=1 00:06:39.266 00:06:39.266 ' 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.266 --rc genhtml_branch_coverage=1 00:06:39.266 --rc genhtml_function_coverage=1 00:06:39.266 --rc genhtml_legend=1 00:06:39.266 --rc geninfo_all_blocks=1 00:06:39.266 --rc geninfo_unexecuted_blocks=1 00:06:39.266 00:06:39.266 ' 00:06:39.266 10:12:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:39.266 10:12:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:39.266 10:12:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:39.266 10:12:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.266 10:12:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.266 ************************************ 00:06:39.266 START TEST default_locks 00:06:39.266 ************************************ 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70706 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70706 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70706 ']' 00:06:39.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.266 10:12:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.525 [2024-11-29 10:12:18.787784] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
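The default_locks target booting above exercises SPDK's CPU core lock mechanism: each spdk_tgt claims one lock file per core in its -m mask under /var/tmp/spdk_cpu_lock_*, and the harness verifies the claim through lslocks. A minimal sketch of that check, mirroring the locks_exist helper from event/cpu_locks.sh visible in the traces below (assumes lslocks from util-linux):

    # Succeeds if the given pid holds any SPDK per-core lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }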
00:06:39.525 [2024-11-29 10:12:18.787923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70706 ] 00:06:39.525 [2024-11-29 10:12:18.927622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.525 [2024-11-29 10:12:18.944229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70706 ']' 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.529 killing process with pid 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70706' 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70706 00:06:40.529 10:12:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70706 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70706 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70706 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70706 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70706 ']' 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.813 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.813 ERROR: process (pid: 70706) is no longer running 00:06:40.813 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70706) - No such process 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.813 10:12:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.813 00:06:40.813 real 0m1.303s 00:06:40.813 user 0m1.306s 00:06:40.814 sys 0m0.390s 00:06:40.814 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.814 10:12:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.814 ************************************ 00:06:40.814 END TEST default_locks 00:06:40.814 ************************************ 00:06:40.814 10:12:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.814 10:12:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.814 10:12:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.814 10:12:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.814 ************************************ 00:06:40.814 START TEST default_locks_via_rpc 00:06:40.814 ************************************ 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70748 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70748 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70748 ']' 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
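The default_locks_via_rpc test starting above moves the same check to runtime: instead of killing the target, it asks a live spdk_tgt to drop and re-acquire its core locks over the RPC socket. A sketch of the two calls the harness drives, using the rpc.py path from this run:

    # Release the per-core lock files while the target keeps running,
    # then re-claim them; enabling fails if another process holds a core.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks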
00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.814 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.814 [2024-11-29 10:12:20.135270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:40.814 [2024-11-29 10:12:20.135364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70748 ] 00:06:40.814 [2024-11-29 10:12:20.273727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.073 [2024-11-29 10:12:20.290481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70748 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70748 00:06:41.640 10:12:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70748 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70748 ']' 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70748 00:06:41.899 10:12:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70748 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70748' 00:06:41.899 killing process with pid 70748 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70748 00:06:41.899 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70748 00:06:42.159 00:06:42.159 real 0m1.320s 00:06:42.159 user 0m1.376s 00:06:42.159 sys 0m0.368s 00:06:42.159 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.159 10:12:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.159 ************************************ 00:06:42.159 END TEST default_locks_via_rpc 00:06:42.159 ************************************ 00:06:42.159 10:12:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.159 10:12:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.159 10:12:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.159 10:12:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.159 ************************************ 00:06:42.159 START TEST non_locking_app_on_locked_coremask 00:06:42.159 ************************************ 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70794 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70794 /var/tmp/spdk.sock 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70794 ']' 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
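non_locking_app_on_locked_coremask, starting above, checks that a claimed core does not block an opted-out instance: the first target locks core 0, and a second target on the same mask still boots because it passes --disable-cpumask-locks. A sketch of the pair of launches seen in the traces below (binary and socket paths as in this run):

    # First instance claims core 0; the second reuses the mask but skips the claim.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &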
00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.159 10:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.159 [2024-11-29 10:12:21.522146] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:42.159 [2024-11-29 10:12:21.522266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:06:42.418 [2024-11-29 10:12:21.663257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.418 [2024-11-29 10:12:21.680271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70805 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70805 /var/tmp/spdk2.sock 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70805 ']' 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.985 10:12:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.985 [2024-11-29 10:12:22.415508] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:42.985 [2024-11-29 10:12:22.416039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70805 ] 00:06:43.243 [2024-11-29 10:12:22.566177] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.243 [2024-11-29 10:12:22.566223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.243 [2024-11-29 10:12:22.599189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70794 ']' 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70794' 00:06:44.178 killing process with pid 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70794 00:06:44.178 10:12:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70794 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70805 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70805 ']' 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70805 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70805 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.744 killing process with pid 70805 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70805' 00:06:44.744 10:12:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70805 00:06:44.744 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70805 00:06:45.001 00:06:45.001 real 0m2.845s 00:06:45.001 user 0m3.174s 00:06:45.001 sys 0m0.747s 00:06:45.001 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.001 ************************************ 00:06:45.001 END TEST non_locking_app_on_locked_coremask 00:06:45.001 ************************************ 00:06:45.001 10:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.001 10:12:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.002 10:12:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.002 10:12:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.002 10:12:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.002 ************************************ 00:06:45.002 START TEST locking_app_on_unlocked_coremask 00:06:45.002 ************************************ 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70863 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70863 /var/tmp/spdk.sock 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70863 ']' 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.002 10:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.002 [2024-11-29 10:12:24.415861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:45.002 [2024-11-29 10:12:24.415981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70863 ] 00:06:45.259 [2024-11-29 10:12:24.556764] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
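locking_app_on_unlocked_coremask inverts the previous case: the first target above opted out with --disable-cpumask-locks, so a second, normally locking target on the same mask can claim core 0 itself. A sketch of the scenario under the same paths as this run:

    # First instance leaves core 0 unclaimed; the second locks it normally.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &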
00:06:45.260 [2024-11-29 10:12:24.556808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.260 [2024-11-29 10:12:24.573868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70879 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70879 /var/tmp/spdk2.sock 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70879 ']' 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.824 10:12:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.824 [2024-11-29 10:12:25.275349] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:45.824 [2024-11-29 10:12:25.275462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70879 ] 00:06:46.080 [2024-11-29 10:12:25.424615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.080 [2024-11-29 10:12:25.457870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.644 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.644 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.644 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70879 00:06:46.644 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70879 00:06:46.644 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70863 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70863 ']' 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70863 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70863 00:06:47.208 killing process with pid 70863 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70863' 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70863 00:06:47.208 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70863 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70879 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70879 ']' 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70879 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70879 00:06:47.466 killing process with pid 70879 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.466 10:12:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70879' 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70879 00:06:47.466 10:12:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70879 00:06:47.725 ************************************ 00:06:47.725 END TEST locking_app_on_unlocked_coremask 00:06:47.725 ************************************ 00:06:47.725 00:06:47.725 real 0m2.784s 00:06:47.725 user 0m3.031s 00:06:47.725 sys 0m0.744s 00:06:47.725 10:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.725 10:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.725 10:12:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:47.725 10:12:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.725 10:12:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.725 10:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.985 ************************************ 00:06:47.985 START TEST locking_app_on_locked_coremask 00:06:47.985 ************************************ 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:47.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70937 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70937 /var/tmp/spdk.sock 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70937 ']' 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.985 10:12:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.985 [2024-11-29 10:12:27.292377] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:47.985 [2024-11-29 10:12:27.292549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:06:47.985 [2024-11-29 10:12:27.442211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.245 [2024-11-29 10:12:27.462030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.815 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70953 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70953 /var/tmp/spdk2.sock 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70953 /var/tmp/spdk2.sock 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70953 /var/tmp/spdk2.sock 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70953 ']' 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.816 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.816 [2024-11-29 10:12:28.220976] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
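The second, fully locking instance booting above is expected to fail: pid 70937 already holds the lock for core 0, so the harness wraps the launch in its NOT helper and asserts a non-zero result ("Cannot create lock on core 0" in the traces below). A rough, hypothetical standalone form of that assertion:

    # The second launch aborts during startup because core 0 is already claimed,
    # and exits non-zero instead of listening on /var/tmp/spdk2.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    echo "exit status: $?"   # non-zero; stderr carries the claim_cpu_cores error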
00:06:48.816 [2024-11-29 10:12:28.221095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70953 ] 00:06:49.077 [2024-11-29 10:12:28.380732] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70937 has claimed it. 00:06:49.077 [2024-11-29 10:12:28.380795] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.644 ERROR: process (pid: 70953) is no longer running 00:06:49.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70953) - No such process 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70937 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70937 00:06:49.644 10:12:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70937 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70937 ']' 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70937 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70937 00:06:49.644 killing process with pid 70937 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70937' 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70937 00:06:49.644 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70937 00:06:49.905 ************************************ 00:06:49.905 END TEST locking_app_on_locked_coremask 00:06:49.905 ************************************ 00:06:49.905 00:06:49.905 real 0m2.091s 00:06:49.905 user 0m2.369s 00:06:49.905 sys 0m0.494s 00:06:49.905 10:12:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.905 10:12:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.905 10:12:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.905 10:12:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.905 10:12:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.905 10:12:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.905 ************************************ 00:06:49.905 START TEST locking_overlapped_coremask 00:06:49.905 ************************************ 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:49.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70995 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70995 /var/tmp/spdk.sock 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70995 ']' 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.905 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.906 10:12:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.250 [2024-11-29 10:12:29.411040] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:50.250 [2024-11-29 10:12:29.411589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70995 ] 00:06:50.250 [2024-11-29 10:12:29.554279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.250 [2024-11-29 10:12:29.576373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.250 [2024-11-29 10:12:29.576741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.250 [2024-11-29 10:12:29.576773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71013 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71013 /var/tmp/spdk2.sock 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71013 /var/tmp/spdk2.sock 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71013 /var/tmp/spdk2.sock 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71013 ']' 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.853 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.115 [2024-11-29 10:12:30.339330] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:51.115 [2024-11-29 10:12:30.340414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71013 ] 00:06:51.115 [2024-11-29 10:12:30.511371] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70995 has claimed it. 00:06:51.115 [2024-11-29 10:12:30.511442] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.686 ERROR: process (pid: 71013) is no longer running 00:06:51.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71013) - No such process 00:06:51.686 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.686 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:51.686 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70995 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70995 ']' 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70995 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70995 00:06:51.687 10:12:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.687 10:12:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.687 10:12:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70995' 00:06:51.687 killing process with pid 70995 00:06:51.687 10:12:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70995 00:06:51.687 10:12:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70995 00:06:51.949 00:06:51.949 real 0m1.941s 00:06:51.949 user 0m5.404s 00:06:51.949 sys 0m0.416s 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.949 ************************************ 00:06:51.949 END TEST locking_overlapped_coremask 00:06:51.949 ************************************ 00:06:51.949 10:12:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.949 10:12:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.949 10:12:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.949 10:12:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.949 ************************************ 00:06:51.949 START TEST locking_overlapped_coremask_via_rpc 00:06:51.949 ************************************ 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71055 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71055 /var/tmp/spdk.sock 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71055 ']' 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.949 10:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.208 [2024-11-29 10:12:31.418352] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:52.208 [2024-11-29 10:12:31.419324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:06:52.208 [2024-11-29 10:12:31.576406] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
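locking_overlapped_coremask above showed the conflict for partially overlapping masks: the 0x7 target held /var/tmp/spdk_cpu_lock_000 through _002, and the 0x1c target died on shared core 2. The via_rpc variant starting above launches both sides with --disable-cpumask-locks and defers the claim to an RPC. The overlap itself is plain mask arithmetic:

    # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4; their AND isolates the contested core.
    printf '0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 -> core 2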
00:06:52.208 [2024-11-29 10:12:31.576493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.208 [2024-11-29 10:12:31.613404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.208 [2024-11-29 10:12:31.613916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.208 [2024-11-29 10:12:31.613940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71073 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71073 /var/tmp/spdk2.sock 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71073 ']' 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.142 10:12:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.142 [2024-11-29 10:12:32.323381] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:53.142 [2024-11-29 10:12:32.323496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71073 ] 00:06:53.142 [2024-11-29 10:12:32.473269] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
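For context on the collision the second target is about to hit: the first target runs with -m 0x7 and the second with -m 0x1c, and those coremasks overlap. Decoding them bit by bit (a standalone sketch, not test output):

    for mask in 0x7 0x1c; do
        printf '%-4s -> cores:' "$mask"
        for i in {0..7}; do
            (( (mask >> i) & 1 )) && printf ' %d' "$i"   # bit i set => core i in the mask
        done
        echo
    done
    # 0x7  -> cores: 0 1 2
    # 0x1c -> cores: 2 3 4
    # 0x7 & 0x1c == 0x4: both masks include core 2, which is exactly the core
    # the framework_enable_cpumask_locks call below fails to claim.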
00:06:53.142 [2024-11-29 10:12:32.473314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.142 [2024-11-29 10:12:32.512384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.142 [2024-11-29 10:12:32.512363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.142 [2024-11-29 10:12:32.512461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.711 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.711 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.711 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.711 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.711 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.970 [2024-11-29 10:12:33.186917] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71055 has claimed it. 00:06:53.970 request: 00:06:53.970 { 00:06:53.970 "method": "framework_enable_cpumask_locks", 00:06:53.970 "req_id": 1 00:06:53.970 } 00:06:53.970 Got JSON-RPC error response 00:06:53.970 response: 00:06:53.970 { 00:06:53.970 "code": -32603, 00:06:53.970 "message": "Failed to claim CPU core: 2" 00:06:53.970 } 00:06:53.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
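The request/response pair above is the JSON-RPC traffic behind the test's rpc_cmd wrapper. Replayed as a direct rpc.py call against the second target's socket (a sketch of the same invocation, not additional test output):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # Succeeds only if every core in the target's mask can be locked; core 2 is
    # already held by pid 71055, so the call returns the -32603 error shown above.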
00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71055 /var/tmp/spdk.sock 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71055 ']' 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71073 /var/tmp/spdk2.sock 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71073 ']' 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
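Both waitforlisten calls above block until the corresponding target's RPC socket appears. The real helper lives in test/common/autotest_common.sh and does more bookkeeping; a minimal hypothetical equivalent (waitforlisten_sketch is an invented name) conveys the idea:

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do            # max_retries=100, as in the log
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $sock ]] && return 0               # socket exists: target is listening
            sleep 0.1
        done
        return 1                                     # timed out
    }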
00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.970 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.229 ************************************ 00:06:54.229 END TEST locking_overlapped_coremask_via_rpc 00:06:54.229 ************************************ 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.229 00:06:54.229 real 0m2.278s 00:06:54.229 user 0m1.081s 00:06:54.229 sys 0m0.133s 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.229 10:12:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.229 10:12:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.229 10:12:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71055 ]] 00:06:54.229 10:12:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71055 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71055 ']' 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71055 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71055 00:06:54.229 killing process with pid 71055 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71055' 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71055 00:06:54.229 10:12:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71055 00:06:54.490 10:12:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71073 ]] 00:06:54.490 10:12:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71073 00:06:54.490 10:12:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71073 ']' 00:06:54.490 10:12:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71073 00:06:54.490 10:12:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.490 10:12:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.490 
10:12:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71073 00:06:54.750 killing process with pid 71073 00:06:54.750 10:12:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.750 10:12:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.750 10:12:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71073' 00:06:54.750 10:12:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71073 00:06:54.750 10:12:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71073 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.012 Process with pid 71055 is not found 00:06:55.012 Process with pid 71073 is not found 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71055 ]] 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71055 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71055 ']' 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71055 00:06:55.012 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71055) - No such process 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71055 is not found' 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71073 ]] 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71073 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71073 ']' 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71073 00:06:55.012 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71073) - No such process 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71073 is not found' 00:06:55.012 10:12:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.012 ************************************ 00:06:55.012 END TEST cpu_locks 00:06:55.012 ************************************ 00:06:55.012 00:06:55.012 real 0m15.664s 00:06:55.012 user 0m28.041s 00:06:55.012 sys 0m4.085s 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.012 10:12:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.012 ************************************ 00:06:55.012 END TEST event 00:06:55.012 ************************************ 00:06:55.012 00:06:55.012 real 0m41.060s 00:06:55.012 user 1m20.134s 00:06:55.012 sys 0m6.820s 00:06:55.012 10:12:34 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.012 10:12:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.012 10:12:34 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.012 10:12:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.012 10:12:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.012 10:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.012 ************************************ 00:06:55.012 START TEST thread 00:06:55.012 ************************************ 00:06:55.012 10:12:34 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.012 * Looking for test storage... 
00:06:55.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:55.012 10:12:34 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.012 10:12:34 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.012 10:12:34 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.012 10:12:34 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.012 10:12:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.013 10:12:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.013 10:12:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.013 10:12:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.013 10:12:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.013 10:12:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.013 10:12:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.013 10:12:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.013 10:12:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.013 10:12:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.013 10:12:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.013 10:12:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.013 10:12:34 thread -- scripts/common.sh@345 -- # : 1 00:06:55.013 10:12:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.013 10:12:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.013 10:12:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.013 10:12:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.013 10:12:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.013 10:12:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.013 10:12:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.013 10:12:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.013 10:12:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.013 10:12:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.013 10:12:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.013 10:12:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.013 10:12:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.013 10:12:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.013 10:12:34 thread -- scripts/common.sh@368 -- # return 0 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.013 --rc genhtml_branch_coverage=1 00:06:55.013 --rc genhtml_function_coverage=1 00:06:55.013 --rc genhtml_legend=1 00:06:55.013 --rc geninfo_all_blocks=1 00:06:55.013 --rc geninfo_unexecuted_blocks=1 00:06:55.013 00:06:55.013 ' 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.013 --rc genhtml_branch_coverage=1 00:06:55.013 --rc genhtml_function_coverage=1 00:06:55.013 --rc genhtml_legend=1 00:06:55.013 --rc geninfo_all_blocks=1 00:06:55.013 --rc geninfo_unexecuted_blocks=1 00:06:55.013 00:06:55.013 ' 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:55.013 --rc genhtml_branch_coverage=1 00:06:55.013 --rc genhtml_function_coverage=1 00:06:55.013 --rc genhtml_legend=1 00:06:55.013 --rc geninfo_all_blocks=1 00:06:55.013 --rc geninfo_unexecuted_blocks=1 00:06:55.013 00:06:55.013 ' 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.013 --rc genhtml_branch_coverage=1 00:06:55.013 --rc genhtml_function_coverage=1 00:06:55.013 --rc genhtml_legend=1 00:06:55.013 --rc geninfo_all_blocks=1 00:06:55.013 --rc geninfo_unexecuted_blocks=1 00:06:55.013 00:06:55.013 ' 00:06:55.013 10:12:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.013 10:12:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.275 ************************************ 00:06:55.275 START TEST thread_poller_perf 00:06:55.275 ************************************ 00:06:55.275 10:12:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.275 [2024-11-29 10:12:34.511263] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:55.275 [2024-11-29 10:12:34.511373] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71200 ] 00:06:55.275 [2024-11-29 10:12:34.655403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.275 [2024-11-29 10:12:34.675089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.275 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:56.657 [2024-11-29T10:12:36.122Z] ====================================== 00:06:56.657 [2024-11-29T10:12:36.122Z] busy:2613295270 (cyc) 00:06:56.657 [2024-11-29T10:12:36.122Z] total_run_count: 304000 00:06:56.657 [2024-11-29T10:12:36.122Z] tsc_hz: 2600000000 (cyc) 00:06:56.657 [2024-11-29T10:12:36.122Z] ====================================== 00:06:56.657 [2024-11-29T10:12:36.122Z] poller_cost: 8596 (cyc), 3306 (nsec) 00:06:56.657 00:06:56.657 real 0m1.242s 00:06:56.657 user 0m1.087s 00:06:56.657 sys 0m0.047s 00:06:56.657 10:12:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.657 10:12:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.657 ************************************ 00:06:56.657 END TEST thread_poller_perf 00:06:56.657 ************************************ 00:06:56.657 10:12:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.657 10:12:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:56.657 10:12:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.657 10:12:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.657 ************************************ 00:06:56.657 START TEST thread_poller_perf 00:06:56.657 ************************************ 00:06:56.657 10:12:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.657 [2024-11-29 10:12:35.812403] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:56.657 [2024-11-29 10:12:35.812923] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71232 ] 00:06:56.657 [2024-11-29 10:12:35.960123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.657 [2024-11-29 10:12:35.979655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.657 Running 1000 pollers for 1 seconds with 0 microseconds period. 
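Before the second run's table lands, note how poller_cost is derived in these summaries: busy cycles divided by total_run_count gives the cost per poll in cycles, and tsc_hz converts that to nanoseconds. Checking the first run's figures:

    busy=2613295270 runs=304000 tsc_hz=2600000000
    echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
    # -> poller_cost: 8596 (cyc), 3306 (nsec), matching the table above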
00:06:57.602 [2024-11-29T10:12:37.067Z] ====================================== 00:06:57.602 [2024-11-29T10:12:37.067Z] busy:2603386646 (cyc) 00:06:57.602 [2024-11-29T10:12:37.067Z] total_run_count: 3958000 00:06:57.602 [2024-11-29T10:12:37.067Z] tsc_hz: 2600000000 (cyc) 00:06:57.602 [2024-11-29T10:12:37.067Z] ====================================== 00:06:57.602 [2024-11-29T10:12:37.067Z] poller_cost: 657 (cyc), 252 (nsec) 00:06:57.602 00:06:57.602 real 0m1.234s 00:06:57.602 user 0m1.077s 00:06:57.602 sys 0m0.050s 00:06:57.602 ************************************ 00:06:57.602 END TEST thread_poller_perf 00:06:57.602 ************************************ 00:06:57.602 10:12:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.602 10:12:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.863 10:12:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.863 00:06:57.863 real 0m2.742s 00:06:57.863 user 0m2.257s 00:06:57.863 sys 0m0.232s 00:06:57.863 ************************************ 00:06:57.863 END TEST thread 00:06:57.863 ************************************ 00:06:57.863 10:12:37 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.863 10:12:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.863 10:12:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.863 10:12:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.863 10:12:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.863 10:12:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.863 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.863 ************************************ 00:06:57.863 START TEST app_cmdline 00:06:57.863 ************************************ 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.863 * Looking for test storage... 
00:06:57.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.863 10:12:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.863 --rc genhtml_branch_coverage=1 00:06:57.863 --rc genhtml_function_coverage=1 00:06:57.863 --rc genhtml_legend=1 00:06:57.863 --rc geninfo_all_blocks=1 00:06:57.863 --rc geninfo_unexecuted_blocks=1 00:06:57.863 00:06:57.863 ' 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.863 --rc genhtml_branch_coverage=1 00:06:57.863 --rc genhtml_function_coverage=1 00:06:57.863 --rc genhtml_legend=1 00:06:57.863 --rc geninfo_all_blocks=1 00:06:57.863 --rc geninfo_unexecuted_blocks=1 00:06:57.863 
00:06:57.863 ' 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.863 --rc genhtml_branch_coverage=1 00:06:57.863 --rc genhtml_function_coverage=1 00:06:57.863 --rc genhtml_legend=1 00:06:57.863 --rc geninfo_all_blocks=1 00:06:57.863 --rc geninfo_unexecuted_blocks=1 00:06:57.863 00:06:57.863 ' 00:06:57.863 10:12:37 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.863 --rc genhtml_branch_coverage=1 00:06:57.863 --rc genhtml_function_coverage=1 00:06:57.863 --rc genhtml_legend=1 00:06:57.863 --rc geninfo_all_blocks=1 00:06:57.863 --rc geninfo_unexecuted_blocks=1 00:06:57.863 00:06:57.863 ' 00:06:57.863 10:12:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.863 10:12:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71320 00:06:57.863 10:12:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71320 00:06:57.864 10:12:37 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71320 ']' 00:06:57.864 10:12:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.864 10:12:37 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.864 10:12:37 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.864 10:12:37 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.864 10:12:37 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.864 10:12:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.124 [2024-11-29 10:12:37.346793] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
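The target just launched for app_cmdline was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over its socket. Exercising that surface directly (a sketch; the test below does the same through rpc_cmd):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" spdk_get_version                      # returns the version JSON shown below
    "$rpc" rpc_get_methods | jq -r '.[]' | sort  # lists exactly the two allowed methods
    "$rpc" env_dpdk_get_mem_stats                # any other method: -32601 Method not found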
00:06:58.124 [2024-11-29 10:12:37.346924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71320 ] 00:06:58.124 [2024-11-29 10:12:37.483827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.124 [2024-11-29 10:12:37.502954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.069 { 00:06:59.069 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:59.069 "fields": { 00:06:59.069 "major": 25, 00:06:59.069 "minor": 1, 00:06:59.069 "patch": 0, 00:06:59.069 "suffix": "-pre", 00:06:59.069 "commit": "35cd3e84d" 00:06:59.069 } 00:06:59.069 } 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.069 10:12:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.069 10:12:38 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.331 request: 00:06:59.331 { 00:06:59.331 "method": "env_dpdk_get_mem_stats", 00:06:59.331 "req_id": 1 00:06:59.331 } 00:06:59.331 Got JSON-RPC error response 00:06:59.331 response: 00:06:59.331 { 00:06:59.331 "code": -32601, 00:06:59.331 "message": "Method not found" 00:06:59.331 } 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.331 10:12:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71320 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71320 ']' 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71320 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71320 00:06:59.331 killing process with pid 71320 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71320' 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@973 -- # kill 71320 00:06:59.331 10:12:38 app_cmdline -- common/autotest_common.sh@978 -- # wait 71320 00:06:59.591 ************************************ 00:06:59.591 END TEST app_cmdline 00:06:59.591 ************************************ 00:06:59.591 00:06:59.591 real 0m1.835s 00:06:59.591 user 0m2.179s 00:06:59.592 sys 0m0.401s 00:06:59.592 10:12:38 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.592 10:12:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.592 10:12:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.592 10:12:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.592 10:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.592 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:06:59.592 ************************************ 00:06:59.592 START TEST version 00:06:59.592 ************************************ 00:06:59.592 10:12:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.852 * Looking for test storage... 
00:06:59.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.852 10:12:39 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.852 10:12:39 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.852 10:12:39 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.852 10:12:39 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.852 10:12:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.852 10:12:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.853 10:12:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.853 10:12:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.853 10:12:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.853 10:12:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.853 10:12:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.853 10:12:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.853 10:12:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.853 10:12:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.853 10:12:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.853 10:12:39 version -- scripts/common.sh@344 -- # case "$op" in 00:06:59.853 10:12:39 version -- scripts/common.sh@345 -- # : 1 00:06:59.853 10:12:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.853 10:12:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.853 10:12:39 version -- scripts/common.sh@365 -- # decimal 1 00:06:59.853 10:12:39 version -- scripts/common.sh@353 -- # local d=1 00:06:59.853 10:12:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.853 10:12:39 version -- scripts/common.sh@355 -- # echo 1 00:06:59.853 10:12:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.853 10:12:39 version -- scripts/common.sh@366 -- # decimal 2 00:06:59.853 10:12:39 version -- scripts/common.sh@353 -- # local d=2 00:06:59.853 10:12:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.853 10:12:39 version -- scripts/common.sh@355 -- # echo 2 00:06:59.853 10:12:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.853 10:12:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.853 10:12:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.853 10:12:39 version -- scripts/common.sh@368 -- # return 0 00:06:59.853 10:12:39 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.853 10:12:39 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.853 --rc genhtml_branch_coverage=1 00:06:59.853 --rc genhtml_function_coverage=1 00:06:59.853 --rc genhtml_legend=1 00:06:59.853 --rc geninfo_all_blocks=1 00:06:59.853 --rc geninfo_unexecuted_blocks=1 00:06:59.853 00:06:59.853 ' 00:06:59.853 10:12:39 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.853 --rc genhtml_branch_coverage=1 00:06:59.853 --rc genhtml_function_coverage=1 00:06:59.853 --rc genhtml_legend=1 00:06:59.853 --rc geninfo_all_blocks=1 00:06:59.853 --rc geninfo_unexecuted_blocks=1 00:06:59.853 00:06:59.853 ' 00:06:59.853 10:12:39 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.853 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:59.853 --rc genhtml_branch_coverage=1 00:06:59.853 --rc genhtml_function_coverage=1 00:06:59.853 --rc genhtml_legend=1 00:06:59.853 --rc geninfo_all_blocks=1 00:06:59.853 --rc geninfo_unexecuted_blocks=1 00:06:59.853 00:06:59.853 ' 00:06:59.853 10:12:39 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.853 --rc genhtml_branch_coverage=1 00:06:59.853 --rc genhtml_function_coverage=1 00:06:59.853 --rc genhtml_legend=1 00:06:59.853 --rc geninfo_all_blocks=1 00:06:59.853 --rc geninfo_unexecuted_blocks=1 00:06:59.853 00:06:59.853 ' 00:06:59.853 10:12:39 version -- app/version.sh@17 -- # get_header_version major 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.853 10:12:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # cut -f2 00:06:59.853 10:12:39 version -- app/version.sh@17 -- # major=25 00:06:59.853 10:12:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:59.853 10:12:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # cut -f2 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.853 10:12:39 version -- app/version.sh@18 -- # minor=1 00:06:59.853 10:12:39 version -- app/version.sh@19 -- # get_header_version patch 00:06:59.853 10:12:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # cut -f2 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.853 10:12:39 version -- app/version.sh@19 -- # patch=0 00:06:59.853 10:12:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:59.853 10:12:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.853 10:12:39 version -- app/version.sh@14 -- # cut -f2 00:06:59.853 10:12:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:59.853 10:12:39 version -- app/version.sh@22 -- # version=25.1 00:06:59.853 10:12:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:59.853 10:12:39 version -- app/version.sh@28 -- # version=25.1rc0 00:06:59.853 10:12:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:59.853 10:12:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:59.853 10:12:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:59.853 10:12:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:59.853 ************************************ 00:06:59.853 END TEST version 00:06:59.853 ************************************ 00:06:59.853 00:06:59.853 real 0m0.188s 00:06:59.853 user 0m0.117s 00:06:59.853 sys 0m0.092s 00:06:59.853 10:12:39 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.853 10:12:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:59.853 10:12:39 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:59.853 10:12:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:59.853 10:12:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:59.853 10:12:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:59.853 10:12:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.853 10:12:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.853 10:12:39 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:59.853 10:12:39 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:59.853 10:12:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.853 10:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.853 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:06:59.853 ************************************ 00:06:59.853 START TEST blockdev_nvme 00:06:59.853 ************************************ 00:06:59.853 10:12:39 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:59.853 * Looking for test storage... 00:07:00.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.146 10:12:39 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.146 --rc genhtml_branch_coverage=1 00:07:00.146 --rc genhtml_function_coverage=1 00:07:00.146 --rc genhtml_legend=1 00:07:00.146 --rc geninfo_all_blocks=1 00:07:00.146 --rc geninfo_unexecuted_blocks=1 00:07:00.146 00:07:00.146 ' 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.146 --rc genhtml_branch_coverage=1 00:07:00.146 --rc genhtml_function_coverage=1 00:07:00.146 --rc genhtml_legend=1 00:07:00.146 --rc geninfo_all_blocks=1 00:07:00.146 --rc geninfo_unexecuted_blocks=1 00:07:00.146 00:07:00.146 ' 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.146 --rc genhtml_branch_coverage=1 00:07:00.146 --rc genhtml_function_coverage=1 00:07:00.146 --rc genhtml_legend=1 00:07:00.146 --rc geninfo_all_blocks=1 00:07:00.146 --rc geninfo_unexecuted_blocks=1 00:07:00.146 00:07:00.146 ' 00:07:00.146 10:12:39 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.147 --rc genhtml_branch_coverage=1 00:07:00.147 --rc genhtml_function_coverage=1 00:07:00.147 --rc genhtml_legend=1 00:07:00.147 --rc geninfo_all_blocks=1 00:07:00.147 --rc geninfo_unexecuted_blocks=1 00:07:00.147 00:07:00.147 ' 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.147 10:12:39 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71481 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:00.147 10:12:39 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71481 00:07:00.147 10:12:39 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71481 ']' 00:07:00.147 10:12:39 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.147 10:12:39 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.147 10:12:39 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.147 10:12:39 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.147 10:12:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.147 [2024-11-29 10:12:39.474402] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
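The blockdev_nvme setup that follows builds its bdev configuration with scripts/gen_nvme.sh and loads it over RPC (rpc_cmd below is a thin wrapper around scripts/rpc.py). In outline:

    json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)   # one bdev_nvme_attach_controller entry per local PCIe NVMe device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$json"
    # Here that attaches Nvme0-Nvme3 at 0000:00:10.0 through 0000:00:13.0; each
    # namespace then surfaces as a bdev (Nvme0n1, Nvme1n1, Nvme2n1-Nvme2n3, ...).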
00:07:00.147 [2024-11-29 10:12:39.474728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71481 ] 00:07:00.407 [2024-11-29 10:12:39.618035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.407 [2024-11-29 10:12:39.639354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.978 10:12:40 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.978 10:12:40 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:00.978 10:12:40 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:00.978 10:12:40 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:00.978 10:12:40 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:00.978 10:12:40 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:00.978 10:12:40 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:00.978 10:12:40 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:00.978 10:12:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.978 10:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.238 10:12:40 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.238 10:12:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:01.238 10:12:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.238 10:12:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.238 10:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.500 10:12:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.500 10:12:40 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:01.500 10:12:40 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:01.500 10:12:40 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.500 10:12:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.500 10:12:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:01.500 10:12:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:01.501 10:12:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cace9beb-7220-46f5-a93d-fdeec50b83c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cace9beb-7220-46f5-a93d-fdeec50b83c9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "82a4d399-959a-4233-a1aa-dbbad26937bd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "82a4d399-959a-4233-a1aa-dbbad26937bd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ddc94b91-a42a-4276-8dd0-4f7ff5b96728"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddc94b91-a42a-4276-8dd0-4f7ff5b96728",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4e47a9d5-aeb1-4f93-a556-44507b25bd25"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e47a9d5-aeb1-4f93-a556-44507b25bd25",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2f894d33-46a0-4538-837f-fb0bcdcedf91"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "2f894d33-46a0-4538-837f-fb0bcdcedf91",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "32031785-ac4b-4778-8da8-6e78cd0b044c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "32031785-ac4b-4778-8da8-6e78cd0b044c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:01.501 10:12:40 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:01.501 10:12:40 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:01.501 10:12:40 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:01.501 10:12:40 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71481 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71481 ']' 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71481 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:01.501 10:12:40 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71481 00:07:01.501 killing process with pid 71481 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71481' 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71481 00:07:01.501 10:12:40 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71481 00:07:01.761 10:12:41 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:01.761 10:12:41 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:01.761 10:12:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:01.761 10:12:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.761 10:12:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.761 ************************************ 00:07:01.761 START TEST bdev_hello_world 00:07:01.761 ************************************ 00:07:01.761 10:12:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:01.761 [2024-11-29 10:12:41.160846] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:01.761 [2024-11-29 10:12:41.161123] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71554 ] 00:07:02.022 [2024-11-29 10:12:41.307481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.022 [2024-11-29 10:12:41.326124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.283 [2024-11-29 10:12:41.695412] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:02.283 [2024-11-29 10:12:41.695458] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:02.283 [2024-11-29 10:12:41.695477] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:02.283 [2024-11-29 10:12:41.697575] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:02.283 [2024-11-29 10:12:41.698606] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:02.283 [2024-11-29 10:12:41.698635] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:02.283 [2024-11-29 10:12:41.699198] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:02.283 00:07:02.283 [2024-11-29 10:12:41.699221] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:02.545 00:07:02.545 real 0m0.738s 00:07:02.545 user 0m0.494s 00:07:02.545 sys 0m0.141s 00:07:02.545 10:12:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.545 10:12:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:02.545 ************************************ 00:07:02.545 END TEST bdev_hello_world 00:07:02.545 ************************************ 00:07:02.545 10:12:41 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:02.545 10:12:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.545 10:12:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.545 10:12:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.545 ************************************ 00:07:02.545 START TEST bdev_bounds 00:07:02.545 ************************************ 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71574 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71574' 00:07:02.545 Process bdevio pid: 71574 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71574 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71574 ']' 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.545 10:12:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.545 [2024-11-29 10:12:41.965537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:02.545 [2024-11-29 10:12:41.965667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71574 ] 00:07:02.807 [2024-11-29 10:12:42.112392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.807 [2024-11-29 10:12:42.133245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.807 [2024-11-29 10:12:42.133479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.807 [2024-11-29 10:12:42.133570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.381 10:12:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.381 10:12:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:03.381 10:12:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:03.643 I/O targets: 00:07:03.643 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:03.643 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:03.643 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.643 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.643 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.643 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:03.643 00:07:03.643 00:07:03.643 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.643 http://cunit.sourceforge.net/ 00:07:03.643 00:07:03.643 00:07:03.643 Suite: bdevio tests on: Nvme3n1 00:07:03.643 Test: blockdev write read block ...passed 00:07:03.643 Test: blockdev write zeroes read block ...passed 00:07:03.643 Test: blockdev write zeroes read no split ...passed 00:07:03.643 Test: blockdev write zeroes read split ...passed 00:07:03.643 Test: blockdev write zeroes read split partial ...passed 00:07:03.643 Test: blockdev reset ...[2024-11-29 10:12:42.914019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:03.643 [2024-11-29 10:12:42.916536] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:03.643 passed 00:07:03.643 Test: blockdev write read 8 blocks ...passed 00:07:03.643 Test: blockdev write read size > 128k ...passed 00:07:03.643 Test: blockdev write read invalid size ...passed 00:07:03.643 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.643 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.643 Test: blockdev write read max offset ...passed 00:07:03.643 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.643 Test: blockdev writev readv 8 blocks ...passed 00:07:03.643 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.643 Test: blockdev writev readv block ...passed 00:07:03.643 Test: blockdev writev readv size > 128k ...passed 00:07:03.643 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.643 Test: blockdev comparev and writev ...[2024-11-29 10:12:42.933045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a06000 len:0x1000 00:07:03.643 [2024-11-29 10:12:42.933093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.643 passed 00:07:03.643 Test: blockdev nvme passthru rw ...passed 00:07:03.643 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.643 Test: blockdev nvme admin passthru ...[2024-11-29 10:12:42.935437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.643 [2024-11-29 10:12:42.935476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.643 passed 00:07:03.644 Test: blockdev copy ...passed 00:07:03.644 Suite: bdevio tests on: Nvme2n3 00:07:03.644 Test: blockdev write read block ...passed 00:07:03.644 Test: blockdev write zeroes read block ...passed 00:07:03.644 Test: blockdev write zeroes read no split ...passed 00:07:03.644 Test: blockdev write zeroes read split ...passed 00:07:03.644 Test: blockdev write zeroes read split partial ...passed 00:07:03.644 Test: blockdev reset ...[2024-11-29 10:12:42.965527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:03.644 [2024-11-29 10:12:42.969257] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:03.644 passed 00:07:03.644 Test: blockdev write read 8 blocks ...passed 00:07:03.644 Test: blockdev write read size > 128k ...passed 00:07:03.644 Test: blockdev write read invalid size ...passed 00:07:03.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.644 Test: blockdev write read max offset ...passed 00:07:03.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.644 Test: blockdev writev readv 8 blocks ...passed 00:07:03.644 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.644 Test: blockdev writev readv block ...passed 00:07:03.644 Test: blockdev writev readv size > 128k ...passed 00:07:03.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.644 Test: blockdev comparev and writev ...[2024-11-29 10:12:42.984758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e02000 len:0x1000 00:07:03.644 [2024-11-29 10:12:42.984808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.644 passed 00:07:03.644 Test: blockdev nvme passthru rw ...passed 00:07:03.644 Test: blockdev nvme passthru vendor specific ...[2024-11-29 10:12:42.986185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.644 [2024-11-29 10:12:42.986217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.644 passed 00:07:03.644 Test: blockdev nvme admin passthru ...passed 00:07:03.644 Test: blockdev copy ...passed 00:07:03.644 Suite: bdevio tests on: Nvme2n2 00:07:03.644 Test: blockdev write read block ...passed 00:07:03.644 Test: blockdev write zeroes read block ...passed 00:07:03.644 Test: blockdev write zeroes read no split ...passed 00:07:03.644 Test: blockdev write zeroes read split ...passed 00:07:03.644 Test: blockdev write zeroes read split partial ...passed 00:07:03.644 Test: blockdev reset ...[2024-11-29 10:12:43.015509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:03.644 [2024-11-29 10:12:43.018073] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:03.644 passed 00:07:03.644 Test: blockdev write read 8 blocks ...passed 00:07:03.644 Test: blockdev write read size > 128k ...passed 00:07:03.644 Test: blockdev write read invalid size ...passed 00:07:03.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.644 Test: blockdev write read max offset ...passed 00:07:03.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.644 Test: blockdev writev readv 8 blocks ...passed 00:07:03.644 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.644 Test: blockdev writev readv block ...passed 00:07:03.644 Test: blockdev writev readv size > 128k ...passed 00:07:03.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.644 Test: blockdev comparev and writev ...[2024-11-29 10:12:43.032486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e3b000 len:0x1000 00:07:03.644 [2024-11-29 10:12:43.032526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.644 passed 00:07:03.644 Test: blockdev nvme passthru rw ...passed 00:07:03.644 Test: blockdev nvme passthru vendor specific ...[2024-11-29 10:12:43.034659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.644 [2024-11-29 10:12:43.034687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.644 passed 00:07:03.644 Test: blockdev nvme admin passthru ...passed 00:07:03.644 Test: blockdev copy ...passed 00:07:03.644 Suite: bdevio tests on: Nvme2n1 00:07:03.644 Test: blockdev write read block ...passed 00:07:03.644 Test: blockdev write zeroes read block ...passed 00:07:03.644 Test: blockdev write zeroes read no split ...passed 00:07:03.644 Test: blockdev write zeroes read split ...passed 00:07:03.644 Test: blockdev write zeroes read split partial ...passed 00:07:03.644 Test: blockdev reset ...[2024-11-29 10:12:43.054018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:03.644 [2024-11-29 10:12:43.057374] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:03.644 passed 00:07:03.644 Test: blockdev write read 8 blocks ...passed 00:07:03.644 Test: blockdev write read size > 128k ...passed 00:07:03.644 Test: blockdev write read invalid size ...passed 00:07:03.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.644 Test: blockdev write read max offset ...passed 00:07:03.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.644 Test: blockdev writev readv 8 blocks ...passed 00:07:03.644 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.644 Test: blockdev writev readv block ...passed 00:07:03.644 Test: blockdev writev readv size > 128k ...passed 00:07:03.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.644 Test: blockdev comparev and writev ...[2024-11-29 10:12:43.073781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e37000 len:0x1000 00:07:03.644 [2024-11-29 10:12:43.073912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.644 passed 00:07:03.644 Test: blockdev nvme passthru rw ...passed 00:07:03.644 Test: blockdev nvme passthru vendor specific ...[2024-11-29 10:12:43.077117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.644 [2024-11-29 10:12:43.077204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.644 passed 00:07:03.644 Test: blockdev nvme admin passthru ...passed 00:07:03.644 Test: blockdev copy ...passed 00:07:03.644 Suite: bdevio tests on: Nvme1n1 00:07:03.644 Test: blockdev write read block ...passed 00:07:03.644 Test: blockdev write zeroes read block ...passed 00:07:03.644 Test: blockdev write zeroes read no split ...passed 00:07:03.644 Test: blockdev write zeroes read split ...passed 00:07:03.644 Test: blockdev write zeroes read split partial ...passed 00:07:03.644 Test: blockdev reset ...[2024-11-29 10:12:43.104084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:03.905 passed 00:07:03.905 Test: blockdev write read 8 blocks ...[2024-11-29 10:12:43.106329] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:03.905 passed 00:07:03.905 Test: blockdev write read size > 128k ...passed 00:07:03.905 Test: blockdev write read invalid size ...passed 00:07:03.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.905 Test: blockdev write read max offset ...passed 00:07:03.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.905 Test: blockdev writev readv 8 blocks ...passed 00:07:03.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.905 Test: blockdev writev readv block ...passed 00:07:03.905 Test: blockdev writev readv size > 128k ...passed 00:07:03.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.905 Test: blockdev comparev and writev ...[2024-11-29 10:12:43.121787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e33000 len:0x1000 00:07:03.905 [2024-11-29 10:12:43.121852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.905 passed 00:07:03.905 Test: blockdev nvme passthru rw ...passed 00:07:03.905 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.905 Test: blockdev nvme admin passthru ...[2024-11-29 10:12:43.124312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.905 [2024-11-29 10:12:43.124344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.905 passed 00:07:03.905 Test: blockdev copy ...passed 00:07:03.905 Suite: bdevio tests on: Nvme0n1 00:07:03.905 Test: blockdev write read block ...passed 00:07:03.905 Test: blockdev write zeroes read block ...passed 00:07:03.905 Test: blockdev write zeroes read no split ...passed 00:07:03.905 Test: blockdev write zeroes read split ...passed 00:07:03.905 Test: blockdev write zeroes read split partial ...passed 00:07:03.905 Test: blockdev reset ...[2024-11-29 10:12:43.156865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:03.905 passed 00:07:03.905 Test: blockdev write read 8 blocks ...[2024-11-29 10:12:43.158622] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:03.905 passed 00:07:03.905 Test: blockdev write read size > 128k ...passed 00:07:03.905 Test: blockdev write read invalid size ...passed 00:07:03.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.905 Test: blockdev write read max offset ...passed 00:07:03.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.905 Test: blockdev writev readv 8 blocks ...passed 00:07:03.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.905 Test: blockdev writev readv block ...passed 00:07:03.905 Test: blockdev writev readv size > 128k ...passed 00:07:03.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.905 Test: blockdev comparev and writev ...passed 00:07:03.905 Test: blockdev nvme passthru rw ...[2024-11-29 10:12:43.171481] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:03.905 separate metadata which is not supported yet. 
00:07:03.905 passed 00:07:03.906 Test: blockdev nvme passthru vendor specific ...[2024-11-29 10:12:43.172936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:03.906 [2024-11-29 10:12:43.172975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:03.906 passed 00:07:03.906 Test: blockdev nvme admin passthru ...passed 00:07:03.906 Test: blockdev copy ...passed 00:07:03.906 00:07:03.906 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.906 suites 6 6 n/a 0 0 00:07:03.906 tests 138 138 138 0 0 00:07:03.906 asserts 893 893 893 0 n/a 00:07:03.906 00:07:03.906 Elapsed time = 0.617 seconds 00:07:03.906 0 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71574 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71574 ']' 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71574 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71574 00:07:03.906 killing process with pid 71574 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71574' 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71574 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71574 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:03.906 00:07:03.906 real 0m1.440s 00:07:03.906 user 0m3.638s 00:07:03.906 sys 0m0.254s 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.906 10:12:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:03.906 ************************************ 00:07:03.906 END TEST bdev_bounds 00:07:03.906 ************************************ 00:07:04.165 10:12:43 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:04.165 10:12:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:04.166 10:12:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.166 10:12:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.166 ************************************ 00:07:04.166 START TEST bdev_nbd 00:07:04.166 ************************************ 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71628 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71628 /var/tmp/spdk-nbd.sock 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71628 ']' 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.166 10:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.166 [2024-11-29 10:12:43.471043] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:04.166 [2024-11-29 10:12:43.471166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.166 [2024-11-29 10:12:43.611268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.426 [2024-11-29 10:12:43.630762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.998 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.259 1+0 records in 
00:07:05.259 1+0 records out 00:07:05.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679307 s, 6.0 MB/s 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.259 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.520 1+0 records in 00:07:05.520 1+0 records out 00:07:05.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955737 s, 4.3 MB/s 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.520 10:12:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.783 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.783 1+0 records in 00:07:05.783 1+0 records out 00:07:05.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516885 s, 7.9 MB/s 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.784 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.044 1+0 records in 00:07:06.044 1+0 records out 00:07:06.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885109 s, 4.6 MB/s 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.044 10:12:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.044 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.305 1+0 records in 00:07:06.305 1+0 records out 00:07:06.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00283518 s, 1.4 MB/s 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.305 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.567 1+0 records in 00:07:06.567 1+0 records out 00:07:06.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996184 s, 4.1 MB/s 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.567 10:12:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.567 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd0", 00:07:06.567 "bdev_name": "Nvme0n1" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd1", 00:07:06.567 "bdev_name": "Nvme1n1" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd2", 00:07:06.567 "bdev_name": "Nvme2n1" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd3", 00:07:06.567 "bdev_name": "Nvme2n2" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd4", 00:07:06.567 "bdev_name": "Nvme2n3" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd5", 00:07:06.567 "bdev_name": "Nvme3n1" 00:07:06.567 } 00:07:06.567 ]' 00:07:06.567 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:06.567 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd0", 00:07:06.567 "bdev_name": "Nvme0n1" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd1", 00:07:06.567 "bdev_name": "Nvme1n1" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd2", 00:07:06.567 "bdev_name": "Nvme2n1" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd3", 00:07:06.567 "bdev_name": "Nvme2n2" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd4", 00:07:06.567 "bdev_name": "Nvme2n3" 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "nbd_device": "/dev/nbd5", 00:07:06.567 "bdev_name": "Nvme3n1" 00:07:06.567 } 00:07:06.567 ]' 00:07:06.567 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.827 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.088 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.348 10:12:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:07.349 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.349 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.608 10:12:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.866 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.124 10:12:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.124 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:08.382 /dev/nbd0 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.383 
10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.383 1+0 records in 00:07:08.383 1+0 records out 00:07:08.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588718 s, 7.0 MB/s 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.383 10:12:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:08.640 /dev/nbd1 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.640 1+0 records in 00:07:08.640 1+0 records out 00:07:08.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291195 s, 14.1 MB/s 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.640 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:08.915 /dev/nbd10 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.915 1+0 records in 00:07:08.915 1+0 records out 00:07:08.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502204 s, 8.2 MB/s 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.915 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:09.217 /dev/nbd11 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.217 10:12:48 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.217 1+0 records in 00:07:09.217 1+0 records out 00:07:09.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553631 s, 7.4 MB/s 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.217 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:09.475 /dev/nbd12 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.475 1+0 records in 00:07:09.475 1+0 records out 00:07:09.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562879 s, 7.3 MB/s 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.475 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:09.733 /dev/nbd13 
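The waitfornbd pattern traced above, and repeated once per exported device, is a two-stage readiness check: poll /proc/partitions until the kernel registers the NBD device, then prove it actually services I/O with a single direct read. A condensed sketch of the helper, reconstructed from the xtrace (the retry pacing and the temp-file path are assumptions, not the verbatim SPDK source):

  waitfornbd() {
    local nbd_name=$1 i size
    # Stage 1: poll until the kernel lists the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # Stage 2: one 4 KiB O_DIRECT read must complete and return real data.
    for ((i = 1; i <= 20; i++)); do
      if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
      fi
      sleep 0.1
    done
    return 1
  }

The O_DIRECT flag on the probe read matters: it forces a real round trip through the NBD socket to the SPDK target instead of letting the page cache answer, which is why each "1+0 records in/out" line above corresponds to a live device.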
00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:09.733 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.734 1+0 records in 00:07:09.734 1+0 records out 00:07:09.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484984 s, 8.4 MB/s 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.734 10:12:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd0", 00:07:09.993 "bdev_name": "Nvme0n1" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd1", 00:07:09.993 "bdev_name": "Nvme1n1" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd10", 00:07:09.993 "bdev_name": "Nvme2n1" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd11", 00:07:09.993 "bdev_name": "Nvme2n2" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd12", 00:07:09.993 "bdev_name": "Nvme2n3" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd13", 00:07:09.993 "bdev_name": "Nvme3n1" 00:07:09.993 } 00:07:09.993 ]' 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd0", 00:07:09.993 "bdev_name": "Nvme0n1" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd1", 00:07:09.993 "bdev_name": "Nvme1n1" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd10", 00:07:09.993 "bdev_name": "Nvme2n1" 
00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd11", 00:07:09.993 "bdev_name": "Nvme2n2" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd12", 00:07:09.993 "bdev_name": "Nvme2n3" 00:07:09.993 }, 00:07:09.993 { 00:07:09.993 "nbd_device": "/dev/nbd13", 00:07:09.993 "bdev_name": "Nvme3n1" 00:07:09.993 } 00:07:09.993 ]' 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.993 /dev/nbd1 00:07:09.993 /dev/nbd10 00:07:09.993 /dev/nbd11 00:07:09.993 /dev/nbd12 00:07:09.993 /dev/nbd13' 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.993 /dev/nbd1 00:07:09.993 /dev/nbd10 00:07:09.993 /dev/nbd11 00:07:09.993 /dev/nbd12 00:07:09.993 /dev/nbd13' 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:09.993 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.994 256+0 records in 00:07:09.994 256+0 records out 00:07:09.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00914968 s, 115 MB/s 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.994 256+0 records in 00:07:09.994 256+0 records out 00:07:09.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0784859 s, 13.4 MB/s 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.994 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.254 256+0 records in 00:07:10.254 256+0 records out 00:07:10.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167079 s, 6.3 MB/s 00:07:10.254 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.254 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:10.254 256+0 records in 00:07:10.254 256+0 records out 00:07:10.254 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161256 s, 6.5 MB/s 00:07:10.254 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.254 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:10.515 256+0 records in 00:07:10.515 256+0 records out 00:07:10.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0969024 s, 10.8 MB/s 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.515 256+0 records in 00:07:10.515 256+0 records out 00:07:10.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0875842 s, 12.0 MB/s 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.515 256+0 records in 00:07:10.515 256+0 records out 00:07:10.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0834802 s, 12.6 MB/s 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.515 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:10.774 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.774 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:10.774 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.774 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:10.775 10:12:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.775 10:12:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.775 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.034 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.292 
10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.292 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.550 10:12:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.809 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.068 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:12.327 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:12.327 malloc_lvol_verify 00:07:12.328 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:12.587 c7e73643-0155-422a-bdca-a3710b25c9f6 00:07:12.587 10:12:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:12.847 ebd808b1-4627-4832-9072-f2781ba59dc0 00:07:12.847 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:13.109 /dev/nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:13.109 mke2fs 1.47.0 (5-Feb-2023) 00:07:13.109 Discarding device blocks: 0/4096 done 00:07:13.109 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:13.109 00:07:13.109 Allocating group tables: 0/1 done 00:07:13.109 Writing inode tables: 0/1 done 00:07:13.109 Creating journal (1024 blocks): done 00:07:13.109 Writing superblocks and filesystem accounting information: 0/1 done 00:07:13.109 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
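The nbd_with_lvol_verify stage just traced layers a logical volume on a RAM-backed malloc bdev, exports it over NBD, and lets mkfs.ext4 prove the export reports usable capacity. A sketch of the RPC sequence, condensed from the trace ($SPDK and $SOCK are shorthands introduced here; the size comments are inferred from the RPC arguments and the mkfs output above):

  SPDK=/home/vagrant/spdk_repo/spdk    # repo root, per the traced paths
  SOCK=/var/tmp/spdk-nbd.sock
  # A 16 MB malloc bdev with 512-byte blocks backs the lvolstore.
  $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create -b malloc_lvol_verify 16 512
  $SPDK/scripts/rpc.py -s $SOCK bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $SPDK/scripts/rpc.py -s $SOCK bdev_lvol_create lvol 4 -l lvs    # 4 MB lvol
  $SPDK/scripts/rpc.py -s $SOCK nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0    # yields the "4096 1k blocks" filesystem seen above

mkfs succeeding is the actual assertion here: it can only build its 4096 one-KiB-block filesystem if /sys/block/nbd0/size is non-zero and reads and writes through the export behave like a real block device.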
00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.109 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71628 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71628 ']' 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71628 00:07:13.110 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71628 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.372 killing process with pid 71628 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71628' 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71628 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71628 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:13.372 00:07:13.372 real 0m9.366s 00:07:13.372 user 0m13.536s 00:07:13.372 sys 0m3.139s 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.372 ************************************ 00:07:13.372 END TEST bdev_nbd 00:07:13.372 ************************************ 00:07:13.372 10:12:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.372 10:12:52 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:13.372 10:12:52 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:13.372 skipping fio tests on NVMe due to multi-ns failures. 00:07:13.372 10:12:52 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
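With fio skipped, the verify stage that follows leans on SPDK's bdevperf example application instead. A hedged reading of the traced invocation; the flag glosses follow bdevperf's usage text, and the -C interpretation is inferred from the paired Core Mask 0x1/0x2 job lines in the results:

  # -q 128: queue depth per job; -o 4096: 4 KiB I/Os; -w verify: write a
  # pattern, read it back, and compare; -t 5: run for 5 seconds;
  # -m 0x3: reactor cores 0 and 1; -C: let every core drive every bdev
  # (hence two jobs per namespace in the table below).
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3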
00:07:13.372 10:12:52 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:13.372 10:12:52 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.372 10:12:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:13.372 10:12:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.372 10:12:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.633 ************************************ 00:07:13.633 START TEST bdev_verify 00:07:13.633 ************************************ 00:07:13.633 10:12:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.633 [2024-11-29 10:12:52.895736] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:13.633 [2024-11-29 10:12:52.895863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71994 ] 00:07:13.633 [2024-11-29 10:12:53.041290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.633 [2024-11-29 10:12:53.063086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.633 [2024-11-29 10:12:53.063123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.205 Running I/O for 5 seconds... 00:07:16.535 20928.00 IOPS, 81.75 MiB/s [2024-11-29T10:12:56.945Z] 21184.00 IOPS, 82.75 MiB/s [2024-11-29T10:12:57.889Z] 21333.33 IOPS, 83.33 MiB/s [2024-11-29T10:12:58.829Z] 21472.00 IOPS, 83.88 MiB/s [2024-11-29T10:12:58.829Z] 21644.80 IOPS, 84.55 MiB/s 00:07:19.364 Latency(us) 00:07:19.364 [2024-11-29T10:12:58.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.364 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x0 length 0xbd0bd 00:07:19.364 Nvme0n1 : 5.06 1771.14 6.92 0.00 0.00 72049.42 14014.62 94775.14 00:07:19.364 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:19.364 Nvme0n1 : 5.04 1778.51 6.95 0.00 0.00 71705.52 13913.80 80256.39 00:07:19.364 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x0 length 0xa0000 00:07:19.364 Nvme1n1 : 5.06 1770.63 6.92 0.00 0.00 71893.67 17442.66 81062.99 00:07:19.364 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0xa0000 length 0xa0000 00:07:19.364 Nvme1n1 : 5.04 1777.99 6.95 0.00 0.00 71627.59 18047.61 72593.72 00:07:19.364 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x0 length 0x80000 00:07:19.364 Nvme2n1 : 5.06 1770.12 6.91 0.00 0.00 71714.44 18652.55 77433.30 00:07:19.364 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x80000 length 0x80000 00:07:19.364 Nvme2n1 : 5.07 1793.56 7.01 0.00 0.00 70910.73 8065.97 64527.75 00:07:19.364 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x0 length 0x80000 00:07:19.364 Nvme2n2 : 5.06 1769.66 6.91 0.00 0.00 71562.38 18047.61 66140.95 00:07:19.364 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x80000 length 0x80000 00:07:19.364 Nvme2n2 : 5.07 1793.06 7.00 0.00 0.00 70779.93 8116.38 58478.28 00:07:19.364 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x0 length 0x80000 00:07:19.364 Nvme2n3 : 5.08 1787.34 6.98 0.00 0.00 70780.51 8570.09 59284.87 00:07:19.364 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.364 Verification LBA range: start 0x80000 length 0x80000 00:07:19.365 Nvme2n3 : 5.07 1792.59 7.00 0.00 0.00 70636.74 8620.50 57671.68 00:07:19.365 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.365 Verification LBA range: start 0x0 length 0x20000 00:07:19.365 Nvme3n1 : 5.09 1786.87 6.98 0.00 0.00 70670.66 7259.37 62511.26 00:07:19.365 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.365 Verification LBA range: start 0x20000 length 0x20000 00:07:19.365 Nvme3n1 : 5.07 1792.08 7.00 0.00 0.00 70516.61 8570.09 60091.47 00:07:19.365 [2024-11-29T10:12:58.830Z] =================================================================================================================== 00:07:19.365 [2024-11-29T10:12:58.830Z] Total : 21383.54 83.53 0.00 0.00 71233.65 7259.37 94775.14 00:07:19.932 00:07:19.932 real 0m6.338s 00:07:19.932 user 0m11.990s 00:07:19.932 sys 0m0.205s 00:07:19.932 10:12:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.932 ************************************ 00:07:19.932 END TEST bdev_verify 00:07:19.932 ************************************ 00:07:19.932 10:12:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.932 10:12:59 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:19.932 10:12:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:19.932 10:12:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.932 10:12:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.932 ************************************ 00:07:19.932 START TEST bdev_verify_big_io 00:07:19.932 ************************************ 00:07:19.932 10:12:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:19.932 [2024-11-29 10:12:59.301720] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
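bdev_verify_big_io, starting here, is the same verification pass at 64 KiB per I/O; the far lower IOPS and the sub-second average latencies in its table are the expected cost of large I/Os held at queue depth 128. Only one flag changes relative to the sketch above (same caveats):

  # -o 65536: 64 KiB per I/O instead of 4 KiB; everything else is unchanged.
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3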
00:07:19.932 [2024-11-29 10:12:59.301854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72081 ] 00:07:20.193 [2024-11-29 10:12:59.443022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.193 [2024-11-29 10:12:59.462993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.193 [2024-11-29 10:12:59.463027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.450 Running I/O for 5 seconds... 00:07:24.383 624.00 IOPS, 39.00 MiB/s [2024-11-29T10:13:05.742Z] 1097.50 IOPS, 68.59 MiB/s [2024-11-29T10:13:06.001Z] 1982.00 IOPS, 123.87 MiB/s 00:07:26.536 Latency(us) 00:07:26.536 [2024-11-29T10:13:06.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.536 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x0 length 0xbd0b 00:07:26.536 Nvme0n1 : 5.76 133.43 8.34 0.00 0.00 935634.05 18350.08 1058255.16 00:07:26.536 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:26.536 Nvme0n1 : 5.93 102.55 6.41 0.00 0.00 1194766.22 10737.82 1871304.86 00:07:26.536 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x0 length 0xa000 00:07:26.536 Nvme1n1 : 5.76 130.45 8.15 0.00 0.00 918305.50 69770.63 877577.45 00:07:26.536 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0xa000 length 0xa000 00:07:26.536 Nvme1n1 : 5.93 107.92 6.75 0.00 0.00 1082448.03 115343.36 1000180.18 00:07:26.536 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x0 length 0x8000 00:07:26.536 Nvme2n1 : 5.76 130.08 8.13 0.00 0.00 890191.97 70980.53 916294.10 00:07:26.536 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x8000 length 0x8000 00:07:26.536 Nvme2n1 : 5.95 111.32 6.96 0.00 0.00 1025515.49 81869.59 1329271.73 00:07:26.536 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x0 length 0x8000 00:07:26.536 Nvme2n2 : 5.84 135.18 8.45 0.00 0.00 832805.20 84289.38 942105.21 00:07:26.536 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x8000 length 0x8000 00:07:26.536 Nvme2n2 : 5.96 115.39 7.21 0.00 0.00 960899.03 12804.73 1793871.56 00:07:26.536 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x0 length 0x8000 00:07:26.536 Nvme2n3 : 5.93 147.65 9.23 0.00 0.00 743900.32 22786.36 961463.53 00:07:26.536 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x8000 length 0x8000 00:07:26.536 Nvme2n3 : 5.98 118.69 7.42 0.00 0.00 894874.48 17241.01 1806777.11 00:07:26.536 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.536 Verification LBA range: start 0x0 length 0x2000 00:07:26.536 Nvme3n1 : 5.93 161.81 10.11 0.00 0.00 661642.30 211.10 967916.31 00:07:26.536 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:07:26.536 Verification LBA range: start 0x2000 length 0x2000 00:07:26.536 Nvme3n1 : 6.09 177.62 11.10 0.00 0.00 586402.17 291.45 2090699.22 00:07:26.536 [2024-11-29T10:13:06.001Z] =================================================================================================================== 00:07:26.536 [2024-11-29T10:13:06.001Z] Total : 1572.11 98.26 0.00 0.00 866990.91 211.10 2090699.22 00:07:27.910 00:07:27.910 real 0m7.977s 00:07:27.910 user 0m15.291s 00:07:27.910 sys 0m0.202s 00:07:27.910 10:13:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.910 10:13:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:27.910 ************************************ 00:07:27.910 END TEST bdev_verify_big_io 00:07:27.910 ************************************ 00:07:27.910 10:13:07 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.910 10:13:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:27.910 10:13:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.910 10:13:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.910 ************************************ 00:07:27.910 START TEST bdev_write_zeroes 00:07:27.910 ************************************ 00:07:27.910 10:13:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.910 [2024-11-29 10:13:07.321664] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:27.910 [2024-11-29 10:13:07.321771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72187 ] 00:07:28.168 [2024-11-29 10:13:07.462481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.168 [2024-11-29 10:13:07.485640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.426 Running I/O for 1 seconds... 
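bdev_write_zeroes, now running, swaps in the dedicated zero-fill workload for a one-second pass; with -C and -m omitted, bdevperf runs on a single reactor core (the '-c 0x1' EAL parameter traced above). A sketch of the invocation, same caveats as before:

  # -w write_zeroes exercises the bdev zero-fill path rather than
  # pattern writes; -t 1 keeps it to a one-second smoke run.
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1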
00:07:29.797 76800.00 IOPS, 300.00 MiB/s 00:07:29.797 Latency(us) 00:07:29.797 [2024-11-29T10:13:09.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.797 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.797 Nvme0n1 : 1.02 12728.88 49.72 0.00 0.00 10035.84 8116.38 19358.33 00:07:29.797 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.797 Nvme1n1 : 1.02 12713.75 49.66 0.00 0.00 10034.20 8368.44 19055.85 00:07:29.797 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.797 Nvme2n1 : 1.02 12699.23 49.61 0.00 0.00 10024.27 8469.27 18652.55 00:07:29.797 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.797 Nvme2n2 : 1.02 12684.71 49.55 0.00 0.00 9992.45 7713.08 18249.26 00:07:29.797 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.797 Nvme2n3 : 1.03 12670.28 49.49 0.00 0.00 9986.44 6906.49 18249.26 00:07:29.797 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.797 Nvme3n1 : 1.03 12655.96 49.44 0.00 0.00 9979.30 6553.60 19660.80 00:07:29.797 [2024-11-29T10:13:09.262Z] =================================================================================================================== 00:07:29.797 [2024-11-29T10:13:09.262Z] Total : 76152.81 297.47 0.00 0.00 10008.75 6553.60 19660.80 00:07:29.797 00:07:29.797 real 0m1.818s 00:07:29.797 user 0m1.536s 00:07:29.797 sys 0m0.173s 00:07:29.797 10:13:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.797 10:13:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:29.797 ************************************ 00:07:29.797 END TEST bdev_write_zeroes 00:07:29.797 ************************************ 00:07:29.797 10:13:09 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.797 10:13:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:29.797 10:13:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.797 10:13:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.797 ************************************ 00:07:29.797 START TEST bdev_json_nonenclosed 00:07:29.797 ************************************ 00:07:29.797 10:13:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.797 [2024-11-29 10:13:09.184967] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
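bdev_json_nonenclosed, starting here, is a negative test: bdevperf is pointed at nonenclosed.json, a configuration whose top level is deliberately not wrapped in braces, and must reject it cleanly rather than crash. A hypothetical minimal config of that shape (the repo file's exact contents are not shown in this log):

  "subsystems": [
  ]

The pass criterion is the failure signature visible in the trace that follows: json_config_prepare_ctx reports "not enclosed in {}" and the app exits through spdk_app_stop with a non-zero code.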
00:07:29.797 [2024-11-29 10:13:09.185091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72223 ] 00:07:30.055 [2024-11-29 10:13:09.328931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.055 [2024-11-29 10:13:09.348122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.055 [2024-11-29 10:13:09.348197] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:30.055 [2024-11-29 10:13:09.348215] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.055 [2024-11-29 10:13:09.348228] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.055 00:07:30.055 real 0m0.285s 00:07:30.055 user 0m0.102s 00:07:30.055 sys 0m0.081s 00:07:30.055 10:13:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.055 10:13:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:30.055 ************************************ 00:07:30.055 END TEST bdev_json_nonenclosed 00:07:30.055 ************************************ 00:07:30.055 10:13:09 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.055 10:13:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:30.055 10:13:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.055 10:13:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.055 ************************************ 00:07:30.055 START TEST bdev_json_nonarray 00:07:30.055 ************************************ 00:07:30.055 10:13:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.055 [2024-11-29 10:13:09.510223] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:30.055 [2024-11-29 10:13:09.510330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72249 ] 00:07:30.312 [2024-11-29 10:13:09.663270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.312 [2024-11-29 10:13:09.682603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.312 [2024-11-29 10:13:09.682706] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
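bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is handed a deliberately malformed --json config, and each test passes only when the app refuses to start, so the json_config, rpc.c and app.c errors around this point are the expected failure path rather than a fault. A sketch of the two malformed shapes, assumed for illustration (the real nonenclosed.json and nonarray.json under test/bdev/ may differ in detail):

# nonenclosed.json, config content not enclosed in {} (assumed shape):
#   "subsystems": []
# nonarray.json, "subsystems" present but not an array (assumed shape):
#   { "subsystems": { "subsystem": "bdev" } }
# a well-formed config, for contrast:
#   { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }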
00:07:30.312 [2024-11-29 10:13:09.682729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.312 [2024-11-29 10:13:09.682746] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.312 00:07:30.312 real 0m0.296s 00:07:30.312 user 0m0.111s 00:07:30.312 sys 0m0.083s 00:07:30.312 10:13:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.312 10:13:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:30.312 ************************************ 00:07:30.312 END TEST bdev_json_nonarray 00:07:30.312 ************************************ 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:30.570 10:13:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:30.570 00:07:30.570 real 0m30.532s 00:07:30.570 user 0m48.708s 00:07:30.570 sys 0m4.975s 00:07:30.570 10:13:09 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.570 10:13:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.570 ************************************ 00:07:30.570 END TEST blockdev_nvme 00:07:30.570 ************************************ 00:07:30.570 10:13:09 -- spdk/autotest.sh@209 -- # uname -s 00:07:30.570 10:13:09 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:30.570 10:13:09 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:30.570 10:13:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.570 10:13:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.570 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:30.570 ************************************ 00:07:30.570 START TEST blockdev_nvme_gpt 00:07:30.570 ************************************ 00:07:30.570 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:30.570 * Looking for test storage... 
00:07:30.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:30.570 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.570 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.570 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.570 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.570 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.570 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.571 10:13:09 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.571 --rc genhtml_branch_coverage=1 00:07:30.571 --rc genhtml_function_coverage=1 00:07:30.571 --rc genhtml_legend=1 00:07:30.571 --rc geninfo_all_blocks=1 00:07:30.571 --rc geninfo_unexecuted_blocks=1 00:07:30.571 00:07:30.571 ' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.571 --rc 
genhtml_branch_coverage=1 00:07:30.571 --rc genhtml_function_coverage=1 00:07:30.571 --rc genhtml_legend=1 00:07:30.571 --rc geninfo_all_blocks=1 00:07:30.571 --rc geninfo_unexecuted_blocks=1 00:07:30.571 00:07:30.571 ' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.571 --rc genhtml_branch_coverage=1 00:07:30.571 --rc genhtml_function_coverage=1 00:07:30.571 --rc genhtml_legend=1 00:07:30.571 --rc geninfo_all_blocks=1 00:07:30.571 --rc geninfo_unexecuted_blocks=1 00:07:30.571 00:07:30.571 ' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.571 --rc genhtml_branch_coverage=1 00:07:30.571 --rc genhtml_function_coverage=1 00:07:30.571 --rc genhtml_legend=1 00:07:30.571 --rc geninfo_all_blocks=1 00:07:30.571 --rc geninfo_unexecuted_blocks=1 00:07:30.571 00:07:30.571 ' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72322 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72322 
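The cmp_versions trace above is how the harness picks lcov option spellings: the installed version, taken from lcov --version | awk '{print $NF}', is compared field by field against 2, and the legacy --rc lcov_* names are kept when it is older. A condensed re-implementation for illustration only; the helper name lt and the zero-fill of missing fields are this sketch's assumptions, not the script's exact code. The spdk_tgt launch that follows then blocks on waitforlisten until the RPC socket is up.

lt() {
    local IFS=.- i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    # compare numeric fields left to right; a missing field counts as 0
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1 # equal versions are not strictly less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x: keep the legacy option names'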
00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72322 ']' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.571 10:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:30.571 10:13:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.829 [2024-11-29 10:13:10.053728] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:30.829 [2024-11-29 10:13:10.053855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72322 ] 00:07:30.829 [2024-11-29 10:13:10.196173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.829 [2024-11-29 10:13:10.215842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.764 10:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.764 10:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:31.764 10:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:31.764 10:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:31.764 10:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.022 Waiting for block devices as requested 00:07:32.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.022 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.022 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.280 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.548 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:37.548 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.548 10:13:16 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:37.548 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:37.549 10:13:16 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:37.549 BYT; 00:07:37.549 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:37.549 BYT; 00:07:37.549 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.549 10:13:16 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.549 10:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:38.484 The operation has completed successfully. 00:07:38.484 10:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:39.503 The operation has completed successfully. 00:07:39.503 10:13:18 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.330 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.330 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.330 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.330 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.330 10:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:40.330 10:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.330 10:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.330 [] 00:07:40.330 10:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.330 10:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:40.330 10:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:40.330 10:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:40.330 10:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.330 10:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:40.330 10:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.330 10:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.590 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.590 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:40.590 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.590 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.590 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.590 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:40.590 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.590 10:13:20 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.590 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.851 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:40.851 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:40.852 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "69935845-01c5-4cc3-90ea-3f5e78c14e4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "69935845-01c5-4cc3-90ea-3f5e78c14e4f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9c2638b6-16f7-4b3e-81c2-79a076975222"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c2638b6-16f7-4b3e-81c2-79a076975222",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "78152902-3cd1-4052-b651-600f8475632b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78152902-3cd1-4052-b651-600f8475632b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a67e5144-7265-45f3-bd90-d43c67681138"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a67e5144-7265-45f3-bd90-d43c67681138",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a1ff782b-be12-4c9d-b1d3-4f6ea6f71384"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a1ff782b-be12-4c9d-b1d3-4f6ea6f71384",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:40.852 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.852 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:40.852 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:40.852 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72322 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72322 ']' 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72322 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72322 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.852 killing process with pid 72322 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72322' 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72322 00:07:40.852 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72322 00:07:41.114 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:41.114 10:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:41.114 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:41.114 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.114 10:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.114 ************************************ 00:07:41.114 START TEST bdev_hello_world 00:07:41.114 ************************************ 00:07:41.114 10:13:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:41.114 
[2024-11-29 10:13:20.539270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:41.114 [2024-11-29 10:13:20.539375] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72936 ] 00:07:41.375 [2024-11-29 10:13:20.684482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.375 [2024-11-29 10:13:20.703073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.636 [2024-11-29 10:13:21.072281] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.636 [2024-11-29 10:13:21.072327] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:41.636 [2024-11-29 10:13:21.072344] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.636 [2024-11-29 10:13:21.074407] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.636 [2024-11-29 10:13:21.075307] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.636 [2024-11-29 10:13:21.075338] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.636 [2024-11-29 10:13:21.075946] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:41.636 00:07:41.636 [2024-11-29 10:13:21.075971] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:41.896 00:07:41.896 real 0m0.736s 00:07:41.896 user 0m0.488s 00:07:41.896 sys 0m0.145s 00:07:41.896 ************************************ 00:07:41.896 END TEST bdev_hello_world 00:07:41.896 ************************************ 00:07:41.896 10:13:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.897 10:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.897 10:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.897 10:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.897 10:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.897 ************************************ 00:07:41.897 START TEST bdev_bounds 00:07:41.897 ************************************ 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72961 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.897 Process bdevio pid: 72961 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72961' 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72961 00:07:41.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
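The hello_bdev run above is essentially the bdev API's hello-world: it opens the bdev named by -b (here the raw Nvme0n1), writes "Hello World!" through an I/O channel, reads it back and stops the app, which is exactly the NOTICE sequence in its output. bdev_bounds (bdevio pid 72961) is the next step up: the bdevio app registers one CUnit suite per bdev and waits, then tests.py perform_tests fires every suite over the RPC socket. The COMPARE FAILURE completions printed inside otherwise passing comparev tests below appear to be provoked on purpose by the test, so a failure notice there is not by itself a test failure. A sketch of both invocations from an SPDK checkout, commands as traced, flag glosses assumed:

# bdev API hello-world: '-b Nvme0n1' names the bdev to open
./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

# bounds testing: '-w' is assumed to make bdevio wait for RPC-driven
# tests, '-s 0' to reserve no extra memory
./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
# once the RPC socket is up, trigger every registered suite
./test/bdev/bdevio/tests.py perform_tests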
00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72961 ']' 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.897 10:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.897 [2024-11-29 10:13:21.340323] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:41.897 [2024-11-29 10:13:21.340444] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72961 ] 00:07:42.157 [2024-11-29 10:13:21.485239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.157 [2024-11-29 10:13:21.506131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.157 [2024-11-29 10:13:21.506630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.157 [2024-11-29 10:13:21.506680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.729 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.729 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:42.729 10:13:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.991 I/O targets: 00:07:42.991 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:42.991 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:42.991 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:42.991 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.991 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.991 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.991 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:42.991 00:07:42.991 00:07:42.991 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.991 http://cunit.sourceforge.net/ 00:07:42.991 00:07:42.991 00:07:42.991 Suite: bdevio tests on: Nvme3n1 00:07:42.991 Test: blockdev write read block ...passed 00:07:42.991 Test: blockdev write zeroes read block ...passed 00:07:42.991 Test: blockdev write zeroes read no split ...passed 00:07:42.991 Test: blockdev write zeroes read split ...passed 00:07:42.991 Test: blockdev write zeroes read split partial ...passed 00:07:42.991 Test: blockdev reset ...[2024-11-29 10:13:22.292473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:42.991 passed 00:07:42.991 Test: blockdev write read 8 blocks ...[2024-11-29 10:13:22.297564] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:42.991 passed 00:07:42.991 Test: blockdev write read size > 128k ...passed 00:07:42.991 Test: blockdev write read invalid size ...passed 00:07:42.991 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.991 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.991 Test: blockdev write read max offset ...passed 00:07:42.991 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.991 Test: blockdev writev readv 8 blocks ...passed 00:07:42.991 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.991 Test: blockdev writev readv block ...passed 00:07:42.991 Test: blockdev writev readv size > 128k ...passed 00:07:42.991 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.991 Test: blockdev comparev and writev ...[2024-11-29 10:13:22.306253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc0e000 len:0x1000 00:07:42.991 [2024-11-29 10:13:22.306306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.991 passed 00:07:42.991 Test: blockdev nvme passthru rw ...passed 00:07:42.991 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.991 Test: blockdev nvme admin passthru ...[2024-11-29 10:13:22.307278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.991 [2024-11-29 10:13:22.307311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.991 passed 00:07:42.991 Test: blockdev copy ...passed 00:07:42.991 Suite: bdevio tests on: Nvme2n3 00:07:42.991 Test: blockdev write read block ...passed 00:07:42.991 Test: blockdev write zeroes read block ...passed 00:07:42.991 Test: blockdev write zeroes read no split ...passed 00:07:42.991 Test: blockdev write zeroes read split ...passed 00:07:42.991 Test: blockdev write zeroes read split partial ...passed 00:07:42.991 Test: blockdev reset ...[2024-11-29 10:13:22.327662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.991 passed 00:07:42.991 Test: blockdev write read 8 blocks ...[2024-11-29 10:13:22.329590] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:42.991 passed 00:07:42.991 Test: blockdev write read size > 128k ...passed 00:07:42.991 Test: blockdev write read invalid size ...passed 00:07:42.991 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.991 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.991 Test: blockdev write read max offset ...passed 00:07:42.991 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.991 Test: blockdev writev readv 8 blocks ...passed 00:07:42.991 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.991 Test: blockdev writev readv block ...passed 00:07:42.991 Test: blockdev writev readv size > 128k ...passed 00:07:42.991 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.991 Test: blockdev comparev and writev ...[2024-11-29 10:13:22.335425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc08000 len:0x1000 00:07:42.991 [2024-11-29 10:13:22.335465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.991 passed 00:07:42.991 Test: blockdev nvme passthru rw ...passed 00:07:42.991 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.991 Test: blockdev nvme admin passthru ...[2024-11-29 10:13:22.336353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.991 [2024-11-29 10:13:22.336382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.991 passed 00:07:42.991 Test: blockdev copy ...passed 00:07:42.991 Suite: bdevio tests on: Nvme2n2 00:07:42.991 Test: blockdev write read block ...passed 00:07:42.991 Test: blockdev write zeroes read block ...passed 00:07:42.991 Test: blockdev write zeroes read no split ...passed 00:07:42.991 Test: blockdev write zeroes read split ...passed 00:07:42.991 Test: blockdev write zeroes read split partial ...passed 00:07:42.991 Test: blockdev reset ...[2024-11-29 10:13:22.353506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.991 passed 00:07:42.991 Test: blockdev write read 8 blocks ...[2024-11-29 10:13:22.355701] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:42.992 passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...[2024-11-29 10:13:22.364153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc02000 len:0x1000 00:07:42.992 [2024-11-29 10:13:22.364191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.992 passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...[2024-11-29 10:13:22.364831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.992 passed 00:07:42.992 Test: blockdev nvme admin passthru ...[2024-11-29 10:13:22.364951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.992 passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: Nvme2n1 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...[2024-11-29 10:13:22.393190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.992 [2024-11-29 10:13:22.399445] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:42.992 Test: blockdev write read 8 blocks ...
00:07:42.992 passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...[2024-11-29 10:13:22.413148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cde04000 len:0x1000 00:07:42.992 [2024-11-29 10:13:22.413184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.992 passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...[2024-11-29 10:13:22.414440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.992 [2024-11-29 10:13:22.414471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.992 passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: Nvme1n1p2 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...[2024-11-29 10:13:22.439301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:42.992 passed 00:07:42.992 Test: blockdev write read 8 blocks ...[2024-11-29 10:13:22.441440] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:42.992 passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...[2024-11-29 10:13:22.448044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d523d000 len:0x1000 00:07:42.992 [2024-11-29 10:13:22.448169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.992 passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: Nvme1n1p1 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:43.255 Test: blockdev write zeroes read no split ...passed 00:07:43.255 Test: blockdev write zeroes read split ...passed 00:07:43.255 Test: blockdev write zeroes read split partial ...passed 00:07:43.255 Test: blockdev reset ...[2024-11-29 10:13:22.465675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:43.255 passed 00:07:43.255 Test: blockdev write read 8 blocks ...[2024-11-29 10:13:22.468679] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:43.255 passed 00:07:43.255 Test: blockdev write read size > 128k ...passed 00:07:43.255 Test: blockdev write read invalid size ...passed 00:07:43.255 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.255 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.255 Test: blockdev write read max offset ...passed 00:07:43.255 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.255 Test: blockdev writev readv 8 blocks ...passed 00:07:43.255 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.255 Test: blockdev writev readv block ...passed 00:07:43.255 Test: blockdev writev readv size > 128k ...passed 00:07:43.255 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.255 Test: blockdev comparev and writev ...[2024-11-29 10:13:22.480310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d5239000 len:0x1000 00:07:43.255 [2024-11-29 10:13:22.480349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.255 passed 00:07:43.255 Test: blockdev nvme passthru rw ...passed 00:07:43.255 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.255 Test: blockdev nvme admin passthru ...passed 00:07:43.255 Test: blockdev copy ...passed 00:07:43.255 Suite: bdevio tests on: Nvme0n1 00:07:43.255 Test: blockdev write read block ...passed 00:07:43.255 Test: blockdev write zeroes read block ...passed 00:07:43.255 Test: blockdev write zeroes read no split ...passed 00:07:43.255 Test: blockdev write zeroes read split ...passed 00:07:43.255 Test: blockdev write zeroes read split partial ...passed 00:07:43.255 Test: blockdev reset ...[2024-11-29 10:13:22.499472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:43.255 [2024-11-29 10:13:22.501254] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:43.255 passed 00:07:43.255 Test: blockdev write read 8 blocks ...passed 00:07:43.255 Test: blockdev write read size > 128k ...passed 00:07:43.255 Test: blockdev write read invalid size ...passed 00:07:43.255 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.255 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.255 Test: blockdev write read max offset ...passed 00:07:43.255 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.255 Test: blockdev writev readv 8 blocks ...passed 00:07:43.255 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.255 Test: blockdev writev readv block ...passed 00:07:43.255 Test: blockdev writev readv size > 128k ...passed 00:07:43.255 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.255 Test: blockdev comparev and writev ...passed 00:07:43.255 Test: blockdev nvme passthru rw ...[2024-11-29 10:13:22.506204] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:43.255 separate metadata which is not supported yet. 
00:07:43.255 passed 00:07:43.255 Test: blockdev nvme passthru vendor specific ...[2024-11-29 10:13:22.506812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:43.255 [2024-11-29 10:13:22.506843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:43.255 passed 00:07:43.255 Test: blockdev nvme admin passthru ...passed 00:07:43.255 Test: blockdev copy ...passed 00:07:43.255 00:07:43.255 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.255 suites 7 7 n/a 0 0 00:07:43.255 tests 161 161 161 0 0 00:07:43.255 asserts 1025 1025 1025 0 n/a 00:07:43.255 00:07:43.255 Elapsed time = 0.523 seconds 00:07:43.255 0 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72961 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72961 ']' 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72961 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72961 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72961' 00:07:43.255 killing process with pid 72961 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72961 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72961 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:43.255 00:07:43.255 real 0m1.416s 00:07:43.255 user 0m3.636s 00:07:43.255 sys 0m0.256s 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.255 10:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:43.255 ************************************ 00:07:43.255 END TEST bdev_bounds 00:07:43.256 ************************************ 00:07:43.530 10:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:43.531 10:13:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.531 10:13:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.531 10:13:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.531 ************************************ 00:07:43.531 START TEST bdev_nbd 00:07:43.531 ************************************ 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73015 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73015 /var/tmp/spdk-nbd.sock 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73015 ']' 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:43.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.531 10:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:43.531 [2024-11-29 10:13:22.831270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:43.531 [2024-11-29 10:13:22.831482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.531 [2024-11-29 10:13:22.977821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.792 [2024-11-29 10:13:22.996665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.366 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.627 1+0 records in 00:07:44.627 1+0 records out 00:07:44.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137282 s, 3.0 MB/s 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.627 10:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.889 1+0 records in 00:07:44.889 1+0 records out 00:07:44.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118175 s, 3.5 MB/s 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.889 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.152 1+0 records in 00:07:45.152 1+0 records out 00:07:45.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705398 s, 5.8 MB/s 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.152 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.413 1+0 records in 00:07:45.413 1+0 records out 00:07:45.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108378 s, 3.8 MB/s 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.413 1+0 records in 00:07:45.413 1+0 records out 00:07:45.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075388 s, 5.4 MB/s 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.413 10:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.672 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.673 1+0 records in 00:07:45.673 1+0 records out 00:07:45.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101732 s, 4.0 MB/s 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.673 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.938 1+0 records in 00:07:45.938 1+0 records out 00:07:45.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941058 s, 4.4 MB/s 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.938 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd0", 00:07:46.198 "bdev_name": "Nvme0n1" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd1", 00:07:46.198 "bdev_name": "Nvme1n1p1" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd2", 00:07:46.198 "bdev_name": "Nvme1n1p2" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd3", 00:07:46.198 "bdev_name": "Nvme2n1" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd4", 00:07:46.198 "bdev_name": "Nvme2n2" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd5", 00:07:46.198 "bdev_name": "Nvme2n3" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd6", 00:07:46.198 "bdev_name": "Nvme3n1" 00:07:46.198 } 00:07:46.198 ]' 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd0", 00:07:46.198 "bdev_name": "Nvme0n1" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd1", 00:07:46.198 "bdev_name": "Nvme1n1p1" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd2", 00:07:46.198 "bdev_name": "Nvme1n1p2" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd3", 00:07:46.198 "bdev_name": "Nvme2n1" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd4", 00:07:46.198 "bdev_name": "Nvme2n2" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd5", 00:07:46.198 "bdev_name": "Nvme2n3" 00:07:46.198 }, 00:07:46.198 { 00:07:46.198 "nbd_device": "/dev/nbd6", 00:07:46.198 "bdev_name": "Nvme3n1" 00:07:46.198 } 00:07:46.198 ]' 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.198 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.457 10:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.718 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.978 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.239 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.499 10:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:47.759 10:13:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.759 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.020 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:48.020 /dev/nbd0 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.279 1+0 records in 00:07:48.279 1+0 records out 00:07:48.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479018 s, 8.6 MB/s 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:48.279 /dev/nbd1 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.279 1+0 records in 00:07:48.279 1+0 records out 00:07:48.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044316 s, 9.2 MB/s 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.279 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.280 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.280 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.280 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.280 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.280 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:48.538 /dev/nbd10 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.538 1+0 records in 00:07:48.538 1+0 records out 00:07:48.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285307 s, 14.4 MB/s 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.538 10:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:48.797 /dev/nbd11 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.797 1+0 records in 00:07:48.797 1+0 records out 00:07:48.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413902 s, 9.9 MB/s 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.797 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:49.055 /dev/nbd12 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:49.055 10:13:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.055 1+0 records in 00:07:49.055 1+0 records out 00:07:49.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512951 s, 8.0 MB/s 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.055 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.056 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.056 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:49.317 /dev/nbd13 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.317 1+0 records in 00:07:49.317 1+0 records out 00:07:49.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459657 s, 8.9 MB/s 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.317 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:49.578 /dev/nbd14 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.578 1+0 records in 00:07:49.578 1+0 records out 00:07:49.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121636 s, 3.4 MB/s 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.578 10:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd0", 00:07:49.840 "bdev_name": "Nvme0n1" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd1", 00:07:49.840 "bdev_name": "Nvme1n1p1" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd10", 00:07:49.840 "bdev_name": "Nvme1n1p2" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd11", 00:07:49.840 "bdev_name": "Nvme2n1" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd12", 00:07:49.840 "bdev_name": "Nvme2n2" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd13", 00:07:49.840 "bdev_name": "Nvme2n3" 00:07:49.840 }, 00:07:49.840 { 
00:07:49.840 "nbd_device": "/dev/nbd14", 00:07:49.840 "bdev_name": "Nvme3n1" 00:07:49.840 } 00:07:49.840 ]' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd0", 00:07:49.840 "bdev_name": "Nvme0n1" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd1", 00:07:49.840 "bdev_name": "Nvme1n1p1" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd10", 00:07:49.840 "bdev_name": "Nvme1n1p2" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd11", 00:07:49.840 "bdev_name": "Nvme2n1" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd12", 00:07:49.840 "bdev_name": "Nvme2n2" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd13", 00:07:49.840 "bdev_name": "Nvme2n3" 00:07:49.840 }, 00:07:49.840 { 00:07:49.840 "nbd_device": "/dev/nbd14", 00:07:49.840 "bdev_name": "Nvme3n1" 00:07:49.840 } 00:07:49.840 ]' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:49.840 /dev/nbd1 00:07:49.840 /dev/nbd10 00:07:49.840 /dev/nbd11 00:07:49.840 /dev/nbd12 00:07:49.840 /dev/nbd13 00:07:49.840 /dev/nbd14' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:49.840 /dev/nbd1 00:07:49.840 /dev/nbd10 00:07:49.840 /dev/nbd11 00:07:49.840 /dev/nbd12 00:07:49.840 /dev/nbd13 00:07:49.840 /dev/nbd14' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:49.840 256+0 records in 00:07:49.840 256+0 records out 00:07:49.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685675 s, 153 MB/s 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.840 256+0 records in 00:07:49.840 256+0 records out 00:07:49.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115146 s, 9.1 MB/s 00:07:49.840 
10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.840 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:50.101 256+0 records in 00:07:50.101 256+0 records out 00:07:50.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131942 s, 7.9 MB/s 00:07:50.101 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.101 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:50.101 256+0 records in 00:07:50.101 256+0 records out 00:07:50.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107769 s, 9.7 MB/s 00:07:50.101 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.101 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:50.363 256+0 records in 00:07:50.363 256+0 records out 00:07:50.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102404 s, 10.2 MB/s 00:07:50.363 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.363 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:50.363 256+0 records in 00:07:50.363 256+0 records out 00:07:50.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110557 s, 9.5 MB/s 00:07:50.363 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.363 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:50.624 256+0 records in 00:07:50.624 256+0 records out 00:07:50.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125433 s, 8.4 MB/s 00:07:50.624 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:50.624 256+0 records in 00:07:50.624 256+0 records out 00:07:50.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15733 s, 6.7 MB/s 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.624 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.882 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.882 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.882 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.882 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.882 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.883 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.883 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.883 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.883 
10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.883 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.142 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.401 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.659 10:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:51.659 10:13:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.659 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.916 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.175 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.504 
10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:52.504 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:52.776 malloc_lvol_verify 00:07:52.776 10:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:52.776 ea968e1b-e789-43a1-ac87-70cdbed50321 00:07:52.776 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:53.034 6293621e-9878-463a-bde9-a10bb30e9012 00:07:53.034 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:53.292 /dev/nbd0 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:53.292 mke2fs 1.47.0 (5-Feb-2023) 00:07:53.292 Discarding device blocks: 0/4096 done 00:07:53.292 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:53.292 00:07:53.292 Allocating group tables: 0/1 done 00:07:53.292 Writing inode tables: 0/1 done 00:07:53.292 Creating journal (1024 blocks): done 00:07:53.292 Writing superblocks and filesystem accounting information: 0/1 done 00:07:53.292 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:53.292 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.292 10:13:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73015 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73015 ']' 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73015 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73015 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.549 killing process with pid 73015 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73015' 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73015 00:07:53.549 10:13:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73015 00:07:53.807 10:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:53.807 00:07:53.807 real 0m10.303s 00:07:53.807 user 0m14.813s 00:07:53.807 sys 0m3.551s 00:07:53.807 10:13:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.807 ************************************ 00:07:53.807 END TEST bdev_nbd 00:07:53.807 ************************************ 00:07:53.807 10:13:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.807 10:13:33 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:53.807 skipping fio tests on NVMe due to multi-ns failures. 00:07:53.807 10:13:33 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:53.807 10:13:33 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:53.807 10:13:33 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
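For reference, the bdev_nbd stage traced above reduces to a short attach/write/verify/detach cycle. The following is a minimal sketch rather than the test script itself: rpc.py, the socket path, the bdev names, and the dd/cmp flags are taken verbatim from the trace, while the loop structure and the trailing empty-list check are assumptions inferred from nbd_common.sh.

    #!/usr/bin/env bash
    # Sketch of the NBD cycle from the trace above (assumptions noted inline).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    for i in "${!bdevs[@]}"; do                        # attach each bdev to an NBD device
      $RPC nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    done

    dd if=/dev/urandom of="$TMP" bs=4096 count=256     # 1 MiB random pattern
    for nbd in "${nbds[@]}"; do
      dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct   # write pattern
      cmp -b -n 1M "$TMP" "$nbd"                              # read back and compare
    done

    for nbd in "${nbds[@]}"; do                        # detach everything
      $RPC nbd_stop_disk "$nbd"
    done
    $RPC nbd_get_disks    # expected: [] once all devices are stopped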
00:07:53.807 10:13:33 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.807 10:13:33 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.807 10:13:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:53.807 10:13:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.807 10:13:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.807 ************************************ 00:07:53.807 START TEST bdev_verify 00:07:53.807 ************************************ 00:07:53.807 10:13:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.807 [2024-11-29 10:13:33.164117] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:53.807 [2024-11-29 10:13:33.164226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73421 ] 00:07:54.064 [2024-11-29 10:13:33.307900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.064 [2024-11-29 10:13:33.327990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.064 [2024-11-29 10:13:33.328030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.321 Running I/O for 5 seconds... 
00:07:56.641 23872.00 IOPS, 93.25 MiB/s [2024-11-29T10:13:37.044Z] 23584.00 IOPS, 92.12 MiB/s [2024-11-29T10:13:38.427Z] 22698.67 IOPS, 88.67 MiB/s [2024-11-29T10:13:38.993Z] 21952.00 IOPS, 85.75 MiB/s [2024-11-29T10:13:38.993Z] 21926.40 IOPS, 85.65 MiB/s 00:07:59.528 Latency(us) 00:07:59.528 [2024-11-29T10:13:38.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.528 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x0 length 0xbd0bd 00:07:59.528 Nvme0n1 : 5.08 1512.02 5.91 0.00 0.00 84469.52 12199.78 82676.18 00:07:59.528 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:59.528 Nvme0n1 : 5.06 1592.38 6.22 0.00 0.00 80234.02 10788.23 78643.20 00:07:59.528 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x0 length 0x4ff80 00:07:59.528 Nvme1n1p1 : 5.08 1511.59 5.90 0.00 0.00 84325.43 12199.78 80256.39 00:07:59.528 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:59.528 Nvme1n1p1 : 5.07 1592.03 6.22 0.00 0.00 80130.44 10838.65 74206.92 00:07:59.528 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x0 length 0x4ff7f 00:07:59.528 Nvme1n1p2 : 5.08 1510.69 5.90 0.00 0.00 84224.83 13913.80 77030.01 00:07:59.528 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:59.528 Nvme1n1p2 : 5.07 1591.66 6.22 0.00 0.00 80013.21 10637.00 70173.93 00:07:59.528 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x0 length 0x80000 00:07:59.528 Nvme2n1 : 5.09 1509.82 5.90 0.00 0.00 84077.08 15930.29 79046.50 00:07:59.528 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x80000 length 0x80000 00:07:59.528 Nvme2n1 : 5.07 1591.31 6.22 0.00 0.00 79910.00 10989.88 70980.53 00:07:59.528 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x0 length 0x80000 00:07:59.528 Nvme2n2 : 5.09 1508.96 5.89 0.00 0.00 83960.78 15728.64 80256.39 00:07:59.528 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.528 Verification LBA range: start 0x80000 length 0x80000 00:07:59.529 Nvme2n2 : 5.07 1590.97 6.21 0.00 0.00 79791.57 10384.94 75416.81 00:07:59.529 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.529 Verification LBA range: start 0x0 length 0x80000 00:07:59.529 Nvme2n3 : 5.09 1508.10 5.89 0.00 0.00 83823.06 12149.37 78643.20 00:07:59.529 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.529 Verification LBA range: start 0x80000 length 0x80000 00:07:59.529 Nvme2n3 : 5.07 1590.62 6.21 0.00 0.00 79646.28 10384.94 78239.90 00:07:59.529 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.529 Verification LBA range: start 0x0 length 0x20000 00:07:59.529 Nvme3n1 : 5.10 1507.24 5.89 0.00 0.00 83729.47 9729.58 82676.18 00:07:59.529 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.529 Verification LBA range: start 0x20000 length 0x20000 00:07:59.529 
Nvme3n1 : 5.07 1590.28 6.21 0.00 0.00 79527.36 10637.00 79046.50 00:07:59.529 [2024-11-29T10:13:38.994Z] =================================================================================================================== 00:07:59.529 [2024-11-29T10:13:38.994Z] Total : 21707.67 84.80 0.00 0.00 81939.07 9729.58 82676.18 00:08:00.093 00:08:00.093 real 0m6.324s 00:08:00.093 user 0m11.997s 00:08:00.093 sys 0m0.183s 00:08:00.093 ************************************ 00:08:00.093 END TEST bdev_verify 00:08:00.093 ************************************ 00:08:00.093 10:13:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.093 10:13:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:00.093 10:13:39 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:00.093 10:13:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:00.093 10:13:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.093 10:13:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.093 ************************************ 00:08:00.093 START TEST bdev_verify_big_io 00:08:00.093 ************************************ 00:08:00.093 10:13:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:00.093 [2024-11-29 10:13:39.543501] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:00.093 [2024-11-29 10:13:39.543593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73508 ] 00:08:00.351 [2024-11-29 10:13:39.683654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.351 [2024-11-29 10:13:39.704570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.351 [2024-11-29 10:13:39.704647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.917 Running I/O for 5 seconds... 
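The bdev_verify stage that just completed and the bdev_verify_big_io stage starting here are the same bdevperf invocation with different I/O sizes; both command lines appear verbatim in the run_test traces above, so this is a direct restatement rather than a new recipe:

    # verify: queue depth 128, 4 KiB I/Os, 5 s, cores 0-1 (mask 0x3)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # big-IO variant: identical except for 64 KiB I/Os
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3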
00:08:05.366 1183.00 IOPS, 73.94 MiB/s [2024-11-29T10:13:46.205Z] 1629.00 IOPS, 101.81 MiB/s [2024-11-29T10:13:46.466Z] 2792.33 IOPS, 174.52 MiB/s [2024-11-29T10:13:46.466Z] 2581.00 IOPS, 161.31 MiB/s 00:08:07.001 Latency(us) 00:08:07.001 [2024-11-29T10:13:46.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.001 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0x0 length 0xbd0b 00:08:07.001 Nvme0n1 : 5.76 93.06 5.82 0.00 0.00 1302164.39 15728.64 1445421.69 00:08:07.001 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:07.001 Nvme0n1 : 5.89 103.26 6.45 0.00 0.00 1175771.06 15829.46 1703532.70 00:08:07.001 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0x0 length 0x4ff8 00:08:07.001 Nvme1n1p1 : 5.88 97.29 6.08 0.00 0.00 1213418.61 100824.62 1316366.18 00:08:07.001 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:07.001 Nvme1n1p1 : 5.76 133.37 8.34 0.00 0.00 896858.58 85499.27 822728.86 00:08:07.001 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0x0 length 0x4ff7 00:08:07.001 Nvme1n1p2 : 5.93 102.81 6.43 0.00 0.00 1117883.01 29440.79 1329271.73 00:08:07.001 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:07.001 Nvme1n1p2 : 5.76 133.51 8.34 0.00 0.00 870465.97 108890.58 948557.98 00:08:07.001 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.001 Verification LBA range: start 0x0 length 0x8000 00:08:07.002 Nvme2n1 : 5.95 107.23 6.70 0.00 0.00 1036093.74 22181.42 1109877.37 00:08:07.002 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x8000 length 0x8000 00:08:07.002 Nvme2n1 : 5.76 137.07 8.57 0.00 0.00 838195.61 94775.14 877577.45 00:08:07.002 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x0 length 0x8000 00:08:07.002 Nvme2n2 : 5.95 103.67 6.48 0.00 0.00 1033383.18 15022.87 2206849.18 00:08:07.002 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x8000 length 0x8000 00:08:07.002 Nvme2n2 : 5.84 142.36 8.90 0.00 0.00 792301.16 77433.30 896935.78 00:08:07.002 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x0 length 0x8000 00:08:07.002 Nvme2n3 : 6.07 141.34 8.83 0.00 0.00 734627.16 6956.90 2232660.28 00:08:07.002 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x8000 length 0x8000 00:08:07.002 Nvme2n3 : 5.89 151.76 9.48 0.00 0.00 733352.80 3705.30 961463.53 00:08:07.002 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x0 length 0x2000 00:08:07.002 Nvme3n1 : 6.25 267.46 16.72 0.00 0.00 375328.60 82.31 2051982.57 00:08:07.002 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:07.002 Verification LBA range: start 0x2000 length 0x2000 00:08:07.002 Nvme3n1 : 5.90 157.37 9.84 0.00 0.00 
689644.62 3528.86 961463.53 00:08:07.002 [2024-11-29T10:13:46.467Z] =================================================================================================================== 00:08:07.002 [2024-11-29T10:13:46.467Z] Total : 1871.55 116.97 0.00 0.00 840285.81 82.31 2232660.28 00:08:07.260 00:08:07.260 real 0m7.154s 00:08:07.260 user 0m13.687s 00:08:07.260 sys 0m0.185s 00:08:07.260 10:13:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.260 10:13:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:07.260 ************************************ 00:08:07.260 END TEST bdev_verify_big_io 00:08:07.260 ************************************ 00:08:07.260 10:13:46 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.260 10:13:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:07.260 10:13:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.260 10:13:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:07.260 ************************************ 00:08:07.260 START TEST bdev_write_zeroes 00:08:07.260 ************************************ 00:08:07.260 10:13:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.518 [2024-11-29 10:13:46.754255] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:07.518 [2024-11-29 10:13:46.754350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73606 ] 00:08:07.518 [2024-11-29 10:13:46.892135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.518 [2024-11-29 10:13:46.908748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.083 Running I/O for 1 seconds... 
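The bdev_write_zeroes run under way here uses the same bdevperf harness on a single core; the flags below are copied from the run_test line in the trace (only the trailing empty argument is dropped):

    # write_zeroes: queue depth 128, 4 KiB blocks, 1 s run, single core
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1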
00:08:09.014 66304.00 IOPS, 259.00 MiB/s 00:08:09.014 Latency(us) 00:08:09.014 [2024-11-29T10:13:48.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.014 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme0n1 : 1.03 9424.93 36.82 0.00 0.00 13554.76 10889.06 23492.14 00:08:09.014 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme1n1p1 : 1.03 9413.47 36.77 0.00 0.00 13553.34 10737.82 22887.19 00:08:09.014 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme1n1p2 : 1.03 9402.06 36.73 0.00 0.00 13542.91 10737.82 22181.42 00:08:09.014 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme2n1 : 1.03 9391.57 36.69 0.00 0.00 13540.57 10939.47 21576.47 00:08:09.014 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme2n2 : 1.03 9381.09 36.64 0.00 0.00 13534.01 10989.88 21878.94 00:08:09.014 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme2n3 : 1.03 9370.60 36.60 0.00 0.00 13490.17 9527.93 21878.94 00:08:09.014 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.014 Nvme3n1 : 1.03 9360.19 36.56 0.00 0.00 13468.05 7561.85 23391.31 00:08:09.014 [2024-11-29T10:13:48.479Z] =================================================================================================================== 00:08:09.014 [2024-11-29T10:13:48.479Z] Total : 65743.90 256.81 0.00 0.00 13526.26 7561.85 23492.14 00:08:09.272 00:08:09.272 real 0m1.782s 00:08:09.272 user 0m1.532s 00:08:09.272 sys 0m0.140s 00:08:09.272 10:13:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.272 ************************************ 00:08:09.272 END TEST bdev_write_zeroes 00:08:09.272 ************************************ 00:08:09.273 10:13:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:09.273 10:13:48 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.273 10:13:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:09.273 10:13:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.273 10:13:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.273 ************************************ 00:08:09.273 START TEST bdev_json_nonenclosed 00:08:09.273 ************************************ 00:08:09.273 10:13:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.273 [2024-11-29 10:13:48.605476] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:09.273 [2024-11-29 10:13:48.605593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73643 ] 00:08:09.533 [2024-11-29 10:13:48.747954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.533 [2024-11-29 10:13:48.766656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.533 [2024-11-29 10:13:48.766738] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:09.533 [2024-11-29 10:13:48.766754] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.533 [2024-11-29 10:13:48.766765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.533 00:08:09.533 real 0m0.283s 00:08:09.533 user 0m0.109s 00:08:09.533 sys 0m0.071s 00:08:09.533 10:13:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.533 10:13:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:09.533 ************************************ 00:08:09.533 END TEST bdev_json_nonenclosed 00:08:09.533 ************************************ 00:08:09.533 10:13:48 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.533 10:13:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:09.533 10:13:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.533 10:13:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.533 ************************************ 00:08:09.533 START TEST bdev_json_nonarray 00:08:09.533 ************************************ 00:08:09.533 10:13:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.533 [2024-11-29 10:13:48.950043] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:09.533 [2024-11-29 10:13:48.950150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73668 ] 00:08:09.849 [2024-11-29 10:13:49.095903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.849 [2024-11-29 10:13:49.114509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.849 [2024-11-29 10:13:49.114597] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:09.849 [2024-11-29 10:13:49.114613] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.849 [2024-11-29 10:13:49.114625] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.849 00:08:09.849 real 0m0.283s 00:08:09.849 user 0m0.109s 00:08:09.849 sys 0m0.071s 00:08:09.849 ************************************ 00:08:09.849 END TEST bdev_json_nonarray 00:08:09.849 ************************************ 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:09.849 10:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:09.849 10:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:09.849 10:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:09.849 10:13:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.849 10:13:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.849 10:13:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.849 ************************************ 00:08:09.849 START TEST bdev_gpt_uuid 00:08:09.849 ************************************ 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73694 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73694 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73694 ']' 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:09.849 10:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:09.849 [2024-11-29 10:13:49.308351] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
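The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) point bdevperf at deliberately malformed configs and expect the json_config errors logged above followed by a clean shutdown rather than a crash. A sketch using the file names from the trace; treating the non-zero exit as the pass condition is an assumption:

    # config not enclosed in {} -> json_config error, non-zero exit expected
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
      -q 128 -o 4096 -w write_zeroes -t 1 || echo "failed as expected"

    # 'subsystems' is not an array -> same clean failure path
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
      -q 128 -o 4096 -w write_zeroes -t 1 || echo "failed as expected"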
00:08:09.849 [2024-11-29 10:13:49.308480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73694 ] 00:08:10.108 [2024-11-29 10:13:49.455940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.108 [2024-11-29 10:13:49.474642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.043 Some configs were skipped because the RPC state that can call them passed over. 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:11.043 { 00:08:11.043 "name": "Nvme1n1p1", 00:08:11.043 "aliases": [ 00:08:11.043 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:11.043 ], 00:08:11.043 "product_name": "GPT Disk", 00:08:11.043 "block_size": 4096, 00:08:11.043 "num_blocks": 655104, 00:08:11.043 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:11.043 "assigned_rate_limits": { 00:08:11.043 "rw_ios_per_sec": 0, 00:08:11.043 "rw_mbytes_per_sec": 0, 00:08:11.043 "r_mbytes_per_sec": 0, 00:08:11.043 "w_mbytes_per_sec": 0 00:08:11.043 }, 00:08:11.043 "claimed": false, 00:08:11.043 "zoned": false, 00:08:11.043 "supported_io_types": { 00:08:11.043 "read": true, 00:08:11.043 "write": true, 00:08:11.043 "unmap": true, 00:08:11.043 "flush": true, 00:08:11.043 "reset": true, 00:08:11.043 "nvme_admin": false, 00:08:11.043 "nvme_io": false, 00:08:11.043 "nvme_io_md": false, 00:08:11.043 "write_zeroes": true, 00:08:11.043 "zcopy": false, 00:08:11.043 "get_zone_info": false, 00:08:11.043 "zone_management": false, 00:08:11.043 "zone_append": false, 00:08:11.043 "compare": true, 00:08:11.043 "compare_and_write": false, 00:08:11.043 "abort": true, 00:08:11.043 "seek_hole": false, 00:08:11.043 "seek_data": false, 00:08:11.043 "copy": true, 00:08:11.043 "nvme_iov_md": false 00:08:11.043 }, 00:08:11.043 "driver_specific": { 
00:08:11.043 "gpt": { 00:08:11.043 "base_bdev": "Nvme1n1", 00:08:11.043 "offset_blocks": 256, 00:08:11.043 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:11.043 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:11.043 "partition_name": "SPDK_TEST_first" 00:08:11.043 } 00:08:11.043 } 00:08:11.043 } 00:08:11.043 ]' 00:08:11.043 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:11.304 { 00:08:11.304 "name": "Nvme1n1p2", 00:08:11.304 "aliases": [ 00:08:11.304 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:11.304 ], 00:08:11.304 "product_name": "GPT Disk", 00:08:11.304 "block_size": 4096, 00:08:11.304 "num_blocks": 655103, 00:08:11.304 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:11.304 "assigned_rate_limits": { 00:08:11.304 "rw_ios_per_sec": 0, 00:08:11.304 "rw_mbytes_per_sec": 0, 00:08:11.304 "r_mbytes_per_sec": 0, 00:08:11.304 "w_mbytes_per_sec": 0 00:08:11.304 }, 00:08:11.304 "claimed": false, 00:08:11.304 "zoned": false, 00:08:11.304 "supported_io_types": { 00:08:11.304 "read": true, 00:08:11.304 "write": true, 00:08:11.304 "unmap": true, 00:08:11.304 "flush": true, 00:08:11.304 "reset": true, 00:08:11.304 "nvme_admin": false, 00:08:11.304 "nvme_io": false, 00:08:11.304 "nvme_io_md": false, 00:08:11.304 "write_zeroes": true, 00:08:11.304 "zcopy": false, 00:08:11.304 "get_zone_info": false, 00:08:11.304 "zone_management": false, 00:08:11.304 "zone_append": false, 00:08:11.304 "compare": true, 00:08:11.304 "compare_and_write": false, 00:08:11.304 "abort": true, 00:08:11.304 "seek_hole": false, 00:08:11.304 "seek_data": false, 00:08:11.304 "copy": true, 00:08:11.304 "nvme_iov_md": false 00:08:11.304 }, 00:08:11.304 "driver_specific": { 00:08:11.304 "gpt": { 00:08:11.304 "base_bdev": "Nvme1n1", 00:08:11.304 "offset_blocks": 655360, 00:08:11.304 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:11.304 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:11.304 "partition_name": "SPDK_TEST_second" 00:08:11.304 } 00:08:11.304 } 00:08:11.304 } 00:08:11.304 ]' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 73694 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73694 ']' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73694 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73694 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.304 killing process with pid 73694 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73694' 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73694 00:08:11.304 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73694 00:08:11.564 00:08:11.564 real 0m1.744s 00:08:11.564 user 0m1.935s 00:08:11.564 sys 0m0.321s 00:08:11.564 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.564 10:13:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.564 ************************************ 00:08:11.564 END TEST bdev_gpt_uuid 00:08:11.564 ************************************ 00:08:11.564 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:11.564 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:11.564 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:11.564 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:11.822 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:11.822 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:11.822 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:11.822 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:11.822 10:13:51 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:12.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.082 Waiting for block devices as requested 00:08:12.082 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.341 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:12.341 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.341 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.627 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:17.627 10:13:56 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:17.627 10:13:56 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:17.887 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:17.887 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:17.887 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:17.887 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:17.887 10:13:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:17.887 00:08:17.887 real 0m47.266s 00:08:17.887 user 0m59.947s 00:08:17.887 sys 0m7.337s 00:08:17.887 10:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.887 ************************************ 00:08:17.887 END TEST blockdev_nvme_gpt 00:08:17.887 ************************************ 00:08:17.887 10:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:17.887 10:13:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:17.887 10:13:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.887 10:13:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.887 10:13:57 -- common/autotest_common.sh@10 -- # set +x 00:08:17.887 ************************************ 00:08:17.887 START TEST nvme 00:08:17.887 ************************************ 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:17.887 * Looking for test storage... 00:08:17.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.887 10:13:57 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.887 10:13:57 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.887 10:13:57 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.887 10:13:57 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.887 10:13:57 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.887 10:13:57 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:17.887 10:13:57 nvme -- scripts/common.sh@345 -- # : 1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.887 10:13:57 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.887 10:13:57 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@353 -- # local d=1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.887 10:13:57 nvme -- scripts/common.sh@355 -- # echo 1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.887 10:13:57 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@353 -- # local d=2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.887 10:13:57 nvme -- scripts/common.sh@355 -- # echo 2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.887 10:13:57 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.887 10:13:57 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.887 10:13:57 nvme -- scripts/common.sh@368 -- # return 0 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.887 --rc genhtml_branch_coverage=1 00:08:17.887 --rc genhtml_function_coverage=1 00:08:17.887 --rc genhtml_legend=1 00:08:17.887 --rc geninfo_all_blocks=1 00:08:17.887 --rc geninfo_unexecuted_blocks=1 00:08:17.887 00:08:17.887 ' 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.887 --rc genhtml_branch_coverage=1 00:08:17.887 --rc genhtml_function_coverage=1 00:08:17.887 --rc genhtml_legend=1 00:08:17.887 --rc geninfo_all_blocks=1 00:08:17.887 --rc geninfo_unexecuted_blocks=1 00:08:17.887 00:08:17.887 ' 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.887 --rc genhtml_branch_coverage=1 00:08:17.887 --rc genhtml_function_coverage=1 00:08:17.887 --rc genhtml_legend=1 00:08:17.887 --rc geninfo_all_blocks=1 00:08:17.887 --rc geninfo_unexecuted_blocks=1 00:08:17.887 00:08:17.887 ' 00:08:17.887 10:13:57 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.887 --rc genhtml_branch_coverage=1 00:08:17.887 --rc genhtml_function_coverage=1 00:08:17.887 --rc genhtml_legend=1 00:08:17.887 --rc geninfo_all_blocks=1 00:08:17.887 --rc geninfo_unexecuted_blocks=1 00:08:17.887 00:08:17.887 ' 00:08:17.887 10:13:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:18.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.062 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.063 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.063 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.063 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.063 10:13:58 nvme -- nvme/nvme.sh@79 -- # uname 00:08:19.063 10:13:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:19.063 10:13:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:19.063 10:13:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:19.063 10:13:58 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:19.063 Waiting for stub to ready for secondary processes... 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1075 -- # stubpid=74317 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74317 ]] 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:19.063 10:13:58 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:19.063 [2024-11-29 10:13:58.434403] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:19.063 [2024-11-29 10:13:58.434517] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:20.007 [2024-11-29 10:13:59.156869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.007 [2024-11-29 10:13:59.169546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.007 [2024-11-29 10:13:59.169790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.007 [2024-11-29 10:13:59.169885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.007 [2024-11-29 10:13:59.182930] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:20.007 [2024-11-29 10:13:59.182987] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.007 [2024-11-29 10:13:59.196406] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:20.007 [2024-11-29 10:13:59.196619] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:20.007 [2024-11-29 10:13:59.197510] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.007 [2024-11-29 10:13:59.197757] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:20.007 [2024-11-29 10:13:59.197850] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:20.007 [2024-11-29 10:13:59.198612] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.007 [2024-11-29 10:13:59.198816] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:20.007 [2024-11-29 10:13:59.198867] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:20.007 [2024-11-29 10:13:59.199847] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.007 [2024-11-29 10:13:59.200016] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:20.007 [2024-11-29 10:13:59.200076] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:20.007 [2024-11-29 10:13:59.200128] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:20.007 [2024-11-29 10:13:59.200188] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:20.007 done. 00:08:20.007 10:13:59 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:20.007 10:13:59 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:20.007 10:13:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:20.007 10:13:59 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:20.007 10:13:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.007 10:13:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.007 ************************************ 00:08:20.007 START TEST nvme_reset 00:08:20.008 ************************************ 00:08:20.008 10:13:59 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:20.269 Initializing NVMe Controllers 00:08:20.269 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:20.269 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:20.269 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:20.269 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:20.269 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:20.269 ************************************ 00:08:20.269 END TEST nvme_reset 00:08:20.270 ************************************ 00:08:20.270 00:08:20.270 real 0m0.187s 00:08:20.270 user 0m0.061s 00:08:20.270 sys 0m0.082s 00:08:20.270 10:13:59 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.270 10:13:59 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 10:13:59 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:20.270 10:13:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.270 10:13:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.270 10:13:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 ************************************ 00:08:20.270 START TEST nvme_identify 00:08:20.270 ************************************ 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:20.270 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:20.270 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:20.270 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:20.270 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:20.270 10:13:59 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:20.270 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:20.532 
===================================================== 00:08:20.532 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.532 ===================================================== 00:08:20.532 Controller Capabilities/Features 00:08:20.532 ================================ 00:08:20.532 Vendor ID: 1b36 00:08:20.532 Subsystem Vendor ID: 1af4 00:08:20.532 Serial Number: 12343 00:08:20.532 Model Number: QEMU NVMe Ctrl 00:08:20.532 Firmware Version: 8.0.0 00:08:20.532 Recommended Arb Burst: 6 00:08:20.532 IEEE OUI Identifier: 00 54 52 00:08:20.532 Multi-path I/O 00:08:20.532 May have multiple subsystem ports: No 00:08:20.532 May have multiple controllers: Yes 00:08:20.532 Associated with SR-IOV VF: No 00:08:20.532 Max Data Transfer Size: 524288 00:08:20.532 Max Number of Namespaces: 256 00:08:20.532 Max Number of I/O Queues: 64 00:08:20.532 NVMe Specification Version (VS): 1.4 00:08:20.532 NVMe Specification Version (Identify): 1.4 00:08:20.532 Maximum Queue Entries: 2048 00:08:20.532 Contiguous Queues Required: Yes 00:08:20.532 Arbitration Mechanisms Supported 00:08:20.532 Weighted Round Robin: Not Supported 00:08:20.532 Vendor Specific: Not Supported 00:08:20.532 Reset Timeout: 7500 ms 00:08:20.532 Doorbell Stride: 4 bytes 00:08:20.532 NVM Subsystem Reset: Not Supported 00:08:20.532 Command Sets Supported 00:08:20.532 NVM Command Set: Supported 00:08:20.532 Boot Partition: Not Supported 00:08:20.532 Memory Page Size Minimum: 4096 bytes 00:08:20.532 Memory Page Size Maximum: 65536 bytes 00:08:20.532 Persistent Memory Region: Not Supported 00:08:20.532 Optional Asynchronous Events Supported 00:08:20.532 Namespace Attribute Notices: Supported 00:08:20.532 Firmware Activation Notices: Not Supported 00:08:20.532 ANA Change Notices: Not Supported 00:08:20.532 PLE Aggregate Log Change Notices: Not Supported 00:08:20.532 LBA Status Info Alert Notices: Not Supported 00:08:20.532 EGE Aggregate Log Change Notices: Not Supported 00:08:20.532 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.532 Zone Descriptor Change Notices: Not Supported 00:08:20.532 Discovery Log Change Notices: Not Supported 00:08:20.532 Controller Attributes 00:08:20.532 128-bit Host Identifier: Not Supported 00:08:20.532 Non-Operational Permissive Mode: Not Supported 00:08:20.532 NVM Sets: Not Supported 00:08:20.532 Read Recovery Levels: Not Supported 00:08:20.532 Endurance Groups: Supported 00:08:20.532 Predictable Latency Mode: Not Supported 00:08:20.532 Traffic Based Keep ALive: Not Supported 00:08:20.532 Namespace Granularity: Not Supported 00:08:20.532 SQ Associations: Not Supported 00:08:20.532 UUID List: Not Supported 00:08:20.532 Multi-Domain Subsystem: Not Supported 00:08:20.532 Fixed Capacity Management: Not Supported 00:08:20.532 Variable Capacity Management: Not Supported 00:08:20.532 Delete Endurance Group: Not Supported 00:08:20.532 Delete NVM Set: Not Supported 00:08:20.532 Extended LBA Formats Supported: Supported 00:08:20.532 Flexible Data Placement Supported: Supported 00:08:20.532 00:08:20.532 Controller Memory Buffer Support 00:08:20.532 ================================ 00:08:20.532 Supported: No 00:08:20.532 00:08:20.532 Persistent Memory Region Support 00:08:20.532 ================================ 00:08:20.532 Supported: No 00:08:20.532 00:08:20.532 Admin Command Set Attributes 00:08:20.532 ============================ 00:08:20.532 Security Send/Receive: Not Supported 00:08:20.532 Format NVM: Supported 00:08:20.532 Firmware Activate/Download: Not Supported 00:08:20.532 Namespace Management: Supported 
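[Editor's note] The identify pass above is driven by the nvme_identify helper: as the trace shows, it collects the controllers' PCI addresses from scripts/gen_nvme.sh via jq and then runs build/bin/spdk_nvme_identify -i 0, which prints the per-controller reports that follow. A minimal sketch of that flow, assuming only the repo-relative paths visible in the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    # Collect the PCI addresses (traddr) of all NVMe controllers, as get_nvme_bdfs does
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1                 # the test bails out if none are found
    printf '%s\n' "${bdfs[@]}"                      # 0000:00:10.0 ... 0000:00:13.0 above
    "$rootdir/build/bin/spdk_nvme_identify" -i 0    # dumps every controller, as seen here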
00:08:20.532 Device Self-Test: Not Supported 00:08:20.532 Directives: Supported 00:08:20.532 NVMe-MI: Not Supported 00:08:20.532 Virtualization Management: Not Supported 00:08:20.532 Doorbell Buffer Config: Supported 00:08:20.532 Get LBA Status Capability: Not Supported 00:08:20.532 Command & Feature Lockdown Capability: Not Supported 00:08:20.532 Abort Command Limit: 4 00:08:20.532 Async Event Request Limit: 4 00:08:20.532 Number of Firmware Slots: N/A 00:08:20.532 Firmware Slot 1 Read-Only: N/A 00:08:20.532 Firmware Activation Without Reset: N/A 00:08:20.532 Multiple Update Detection Support: N/A 00:08:20.532 Firmware Update Granularity: No Information Provided 00:08:20.532 Per-Namespace SMART Log: Yes 00:08:20.532 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.532 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:20.532 Command Effects Log Page: Supported 00:08:20.532 Get Log Page Extended Data: Supported 00:08:20.532 Telemetry Log Pages: Not Supported 00:08:20.532 Persistent Event Log Pages: Not Supported 00:08:20.532 Supported Log Pages Log Page: May Support 00:08:20.532 Commands Supported & Effects Log Page: Not Supported 00:08:20.532 Feature Identifiers & Effects Log Page:May Support 00:08:20.532 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.532 Data Area 4 for Telemetry Log: Not Supported 00:08:20.532 Error Log Page Entries Supported: 1 00:08:20.532 Keep Alive: Not Supported 00:08:20.532 00:08:20.532 NVM Command Set Attributes 00:08:20.532 ========================== 00:08:20.532 Submission Queue Entry Size 00:08:20.532 Max: 64 00:08:20.532 Min: 64 00:08:20.532 Completion Queue Entry Size 00:08:20.532 Max: 16 00:08:20.532 Min: 16 00:08:20.532 Number of Namespaces: 256 00:08:20.532 Compare Command: Supported 00:08:20.532 Write Uncorrectable Command: Not Supported 00:08:20.532 Dataset Management Command: Supported 00:08:20.532 Write Zeroes Command: Supported 00:08:20.532 Set Features Save Field: Supported 00:08:20.532 Reservations: Not Supported 00:08:20.532 Timestamp: Supported 00:08:20.532 Copy: Supported 00:08:20.532 Volatile Write Cache: Present 00:08:20.532 Atomic Write Unit (Normal): 1 00:08:20.532 Atomic Write Unit (PFail): 1 00:08:20.532 Atomic Compare & Write Unit: 1 00:08:20.532 Fused Compare & Write: Not Supported 00:08:20.533 Scatter-Gather List 00:08:20.533 SGL Command Set: Supported 00:08:20.533 SGL Keyed: Not Supported 00:08:20.533 SGL Bit Bucket Descriptor: Not Supported 00:08:20.533 SGL Metadata Pointer: Not Supported 00:08:20.533 Oversized SGL: Not Supported 00:08:20.533 SGL Metadata Address: Not Supported 00:08:20.533 SGL Offset: Not Supported 00:08:20.533 Transport SGL Data Block: Not Supported 00:08:20.533 Replay Protected Memory Block: Not Supported 00:08:20.533 00:08:20.533 Firmware Slot Information 00:08:20.533 ========================= 00:08:20.533 Active slot: 1 00:08:20.533 Slot 1 Firmware Revision: 1.0 00:08:20.533 00:08:20.533 00:08:20.533 Commands Supported and Effects 00:08:20.533 ============================== 00:08:20.533 Admin Commands 00:08:20.533 -------------- 00:08:20.533 Delete I/O Submission Queue (00h): Supported 00:08:20.533 Create I/O Submission Queue (01h): Supported 00:08:20.533 Get Log Page (02h): Supported 00:08:20.533 Delete I/O Completion Queue (04h): Supported 00:08:20.533 Create I/O Completion Queue (05h): Supported 00:08:20.533 Identify (06h): Supported 00:08:20.533 Abort (08h): Supported 00:08:20.533 Set Features (09h): Supported 00:08:20.533 Get Features (0Ah): Supported 00:08:20.533 Asynchronous Event 
Request (0Ch): Supported 00:08:20.533 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.533 Directive Send (19h): Supported 00:08:20.533 Directive Receive (1Ah): Supported 00:08:20.533 Virtualization Management (1Ch): Supported 00:08:20.533 Doorbell Buffer Config (7Ch): Supported 00:08:20.533 Format NVM (80h): Supported LBA-Change 00:08:20.533 I/O Commands 00:08:20.533 ------------ 00:08:20.533 Flush (00h): Supported LBA-Change 00:08:20.533 Write (01h): Supported LBA-Change 00:08:20.533 Read (02h): Supported 00:08:20.533 Compare (05h): Supported 00:08:20.533 Write Zeroes (08h): Supported LBA-Change 00:08:20.533 Dataset Management (09h): Supported LBA-Change 00:08:20.533 Unknown (0Ch): Supported 00:08:20.533 Unknown (12h): Supported 00:08:20.533 Copy (19h): Supported LBA-Change 00:08:20.533 Unknown (1Dh): Supported LBA-Change 00:08:20.533 00:08:20.533 Error Log 00:08:20.533 ========= 00:08:20.533 00:08:20.533 Arbitration 00:08:20.533 =========== 00:08:20.533 Arbitration Burst: no limit 00:08:20.533 00:08:20.533 Power Management 00:08:20.533 ================ 00:08:20.533 Number of Power States: 1 00:08:20.533 Current Power State: Power State #0 00:08:20.533 Power State #0: 00:08:20.533 Max Power: 25.00 W 00:08:20.533 Non-Operational State: Operational 00:08:20.533 Entry Latency: 16 microseconds 00:08:20.533 Exit Latency: 4 microseconds 00:08:20.533 Relative Read Throughput: 0 00:08:20.533 Relative Read Latency: 0 00:08:20.533 Relative Write Throughput: 0 00:08:20.533 Relative Write Latency: 0 00:08:20.533 Idle Power: Not Reported 00:08:20.533 Active Power: Not Reported 00:08:20.533 Non-Operational Permissive Mode: Not Supported 00:08:20.533 00:08:20.533 Health Information 00:08:20.533 ================== 00:08:20.533 Critical Warnings: 00:08:20.533 Available Spare Space: OK 00:08:20.533 Temperature: OK 00:08:20.533 Device Reliability: OK 00:08:20.533 Read Only: No 00:08:20.533 Volatile Memory Backup: OK 00:08:20.533 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.533 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.533 Available Spare: 0% 00:08:20.533 Available Spare Threshold: 0% 00:08:20.533 Life Percentage Used: 0% 00:08:20.533 Data Units Read: 945 00:08:20.533 Data Units Written: 875 00:08:20.533 Host Read Commands: 39684 00:08:20.533 Host Write Commands: 39107 00:08:20.533 Controller Busy Time: 0 minutes 00:08:20.533 Power Cycles: 0 00:08:20.533 Power On Hours: 0 hours 00:08:20.533 Unsafe Shutdowns: 0 00:08:20.533 Unrecoverable Media Errors: 0 00:08:20.533 Lifetime Error Log Entries: 0 00:08:20.533 Warning Temperature Time: 0 minutes 00:08:20.533 Critical Temperature Time: 0 minutes 00:08:20.533 00:08:20.533 Number of Queues 00:08:20.533 ================ 00:08:20.533 Number of I/O Submission Queues: 64 00:08:20.533 Number of I/O Completion Queues: 64 00:08:20.533 00:08:20.533 ZNS Specific Controller Data 00:08:20.533 ============================ 00:08:20.533 Zone Append Size Limit: 0 00:08:20.533 00:08:20.533 00:08:20.533 Active Namespaces 00:08:20.533 ================= 00:08:20.533 Namespace ID:1 00:08:20.533 Error Recovery Timeout: Unlimited 00:08:20.533 Command Set Identifier: NVM (00h) 00:08:20.533 Deallocate: Supported 00:08:20.533 Deallocated/Unwritten Error: Supported 00:08:20.533 Deallocated Read Value: All 0x00 00:08:20.533 Deallocate in Write Zeroes: Not Supported 00:08:20.533 Deallocated Guard Field: 0xFFFF 00:08:20.533 Flush: Supported 00:08:20.533 Reservation: Not Supported 00:08:20.533 Namespace Sharing Capabilities: Multiple Controllers 
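[Editor's note] The Health Information blocks report temperatures in Kelvin with the Celsius value in parentheses; the conversion is simply degrees C = K - 273, so the 323 Kelvin reading is 50 Celsius and the 343 Kelvin threshold is 70 Celsius. An illustrative one-liner (not part of the test) over a saved copy of this report, here called identify.txt:

    awk '/Temperature/ && /Kelvin/ {
        for (i = 1; i <= NF; i++)
            if ($(i+1) == "Kelvin") print $i - 273, "C"   # 323 -> 50, 343 -> 70
    }' identify.txt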
00:08:20.533 Size (in LBAs): 262144 (1GiB) 00:08:20.533 Capacity (in LBAs): 262144 (1GiB) 00:08:20.533 Utilization (in LBAs): 262144 (1GiB) 00:08:20.533 Thin Provisioning: Not Supported 00:08:20.533 Per-NS Atomic Units: No 00:08:20.533 Maximum Single Source Range Length: 128 00:08:20.533 Maximum Copy Length: 128 00:08:20.533 Maximum Source Range Count: 128 00:08:20.533 NGUID/EUI64 Never Reused: No 00:08:20.533 Namespace Write Protected: No 00:08:20.533 Endurance group ID: 1 00:08:20.533 Number of LBA Formats: 8 00:08:20.533 Current LBA Format: LBA Format #04 00:08:20.533 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.533 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.533 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.533 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.533 00:08:20.533 Get Feature FDP: 00:08:20.533 ================ 00:08:20.533 Enabled: Yes 00:08:20.533 FDP configuration index: 0 00:08:20.533 00:08:20.533 FDP configurations log page 00:08:20.533 =========================== 00:08:20.533 Number of FDP configurations: 1 00:08:20.533 Version: 0 00:08:20.533 Size: 112 00:08:20.533 FDP Configuration Descriptor: 0 00:08:20.533 Descriptor Size: 96 00:08:20.533 Reclaim Group Identifier format: 2 00:08:20.533 FDP Volatile Write Cache: Not Present 00:08:20.533 FDP Configuration: Valid 00:08:20.533 Vendor Specific Size: 0 00:08:20.533 Number of Reclaim Groups: 2 00:08:20.533 Number of Reclaim Unit Handles: 8 00:08:20.533 Max Placement Identifiers: 128 00:08:20.533 Number of Namespaces Supported: 256 00:08:20.533 Reclaim unit Nominal Size: 6000000 bytes 00:08:20.533 Estimated Reclaim Unit Time Limit: Not Reported 00:08:20.533 RUH Desc #000: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #001: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #002: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #003: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #004: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #005: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #006: RUH Type: Initially Isolated 00:08:20.533 RUH Desc #007: RUH Type: Initially Isolated 00:08:20.533 00:08:20.533 FDP reclaim unit handle usage log page 00:08:20.533 ====================================== 00:08:20.533 Number of Reclaim Unit Handles: 8 00:08:20.533 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:20.533 RUH Usage Desc #001: RUH Attributes: Unused 00:08:20.533 RUH Usage Desc #002: RUH Attributes: Unused 00:08:20.533 RUH Usage Desc #003: RUH Attributes: Unused 00:08:20.533 RUH Usage Desc #004: RUH Attributes: Unused 00:08:20.533 RUH Usage Desc #005: RUH Attributes: Unused 00:08:20.533 RUH Usage Desc #006: RUH Attributes: Unused 00:08:20.533 RUH Usage Desc #007: RUH Attributes: Unused 00:08:20.533 00:08:20.533 FDP statistics log page 00:08:20.533 ======================= 00:08:20.533 Host bytes with metadata written: 556048384 00:08:20.533 Media bytes with metadata written: 556126208 00:08:20.533 Media bytes erased: 0 00:08:20.533 00:08:20.533 FDP events log page 00:08:20.533 =================== 00:08:20.533 Number of FDP events: 0 00:08:20.533 00:08:20.533 NVM Specific Namespace Data 00:08:20.533 =========================== 00:08:20.533 Logical Block Storage Tag Mask: 0 00:08:20.533 Protection
Information Capabilities: 00:08:20.533 16b Guard Protection Information Storage Tag Support: No 00:08:20.533 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.533 Storage Tag Check Read Support: No 00:08:20.533 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.533 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.534 ===================================================== 00:08:20.534 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.534 ===================================================== 00:08:20.534 Controller Capabilities/Features 00:08:20.534 ================================ 00:08:20.534 Vendor ID: 1b36 00:08:20.534 Subsystem Vendor ID: 1af4 00:08:20.534 Serial Number: 12340 00:08:20.534 Model Number: QEMU NVMe Ctrl 00:08:20.534 Firmware Version: 8.0.0 00:08:20.534 Recommended Arb Burst: 6 00:08:20.534 IEEE OUI Identifier: 00 54 52 00:08:20.534 Multi-path I/O 00:08:20.534 May have multiple subsystem ports: No 00:08:20.534 May have multiple controllers: No 00:08:20.534 Associated with SR-IOV VF: No 00:08:20.534 Max Data Transfer Size: 524288 00:08:20.534 Max Number of Namespaces: 256 00:08:20.534 Max Number of I/O Queues: 64 00:08:20.534 NVMe Specification Version (VS): 1.4 00:08:20.534 NVMe Specification Version (Identify): 1.4 00:08:20.534 Maximum Queue Entries: 2048 00:08:20.534 Contiguous Queues Required: Yes 00:08:20.534 Arbitration Mechanisms Supported 00:08:20.534 Weighted Round Robin: Not Supported 00:08:20.534 Vendor Specific: Not Supported 00:08:20.534 Reset Timeout: 7500 ms 00:08:20.534 Doorbell Stride: 4 bytes 00:08:20.534 NVM Subsystem Reset: Not Supported 00:08:20.534 Command Sets Supported 00:08:20.534 NVM Command Set: Supported 00:08:20.534 Boot Partition: Not Supported 00:08:20.534 Memory Page Size Minimum: 4096 bytes 00:08:20.534 Memory Page Size Maximum: 65536 bytes 00:08:20.534 Persistent Memory Region: Not Supported 00:08:20.534 Optional Asynchronous Events Supported 00:08:20.534 Namespace Attribute Notices: Supported 00:08:20.534 Firmware Activation Notices: Not Supported 00:08:20.534 ANA Change Notices: Not Supported 00:08:20.534 PLE Aggregate Log Change Notices: Not Supported 00:08:20.534 LBA Status Info Alert Notices: Not Supported 00:08:20.534 EGE Aggregate Log Change Notices: Not Supported 00:08:20.534 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.534 Zone Descriptor Change Notices: Not Supported 00:08:20.534 Discovery Log Change Notices: Not Supported 00:08:20.534 Controller Attributes 00:08:20.534 128-bit Host Identifier: Not Supported 00:08:20.534 Non-Operational Permissive Mode: Not Supported 00:08:20.534 NVM Sets: Not Supported 00:08:20.534 Read Recovery Levels: Not Supported 00:08:20.534 Endurance Groups: Not Supported 00:08:20.534 Predictable Latency Mode: Not Supported 00:08:20.534 Traffic 
Based Keep ALive: Not Supported 00:08:20.534 Namespace Granularity: Not Supported 00:08:20.534 SQ Associations: Not Supported 00:08:20.534 UUID List: Not Supported 00:08:20.534 Multi-Domain Subsystem: Not Supported 00:08:20.534 Fixed Capacity Management: Not Supported 00:08:20.534 Variable Capacity Management: Not Supported 00:08:20.534 Delete Endurance Group: Not Supported 00:08:20.534 Delete NVM Set: Not Supported 00:08:20.534 Extended LBA Formats Supported: Supported 00:08:20.534 Flexible Data Placement Supported: Not Supported 00:08:20.534 00:08:20.534 Controller Memory Buffer Support 00:08:20.534 ================================ 00:08:20.534 Supported: No 00:08:20.534 00:08:20.534 Persistent Memory Region Support 00:08:20.534 ================================ 00:08:20.534 Supported: No 00:08:20.534 00:08:20.534 Admin Command Set Attributes 00:08:20.534 ============================ 00:08:20.534 Security Send/Receive: Not Supported 00:08:20.534 Format NVM: Supported 00:08:20.534 Firmware Activate/Download: Not Supported 00:08:20.534 Namespace Management: Supported 00:08:20.534 Device Self-Test: Not Supported 00:08:20.534 Directives: Supported 00:08:20.534 NVMe-MI: Not Supported 00:08:20.534 Virtualization Management: Not Supported 00:08:20.534 Doorbell Buffer Config: Supported 00:08:20.534 Get LBA Status Capability: Not Supported 00:08:20.534 Command & Feature Lockdown Capability: Not Supported 00:08:20.534 Abort Command Limit: 4 00:08:20.534 Async Event Request Limit: 4 00:08:20.534 Number of Firmware Slots: N/A 00:08:20.534 Firmware Slot 1 Read-Only: N/A 00:08:20.534 Firmware Activation Without Reset: N/A 00:08:20.534 Multiple Update Detection Support: N/A 00:08:20.534 Firmware Update Granularity: No Information Provided 00:08:20.534 Per-Namespace SMART Log: Yes 00:08:20.534 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.534 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:20.534 Command Effects Log Page: Supported 00:08:20.534 Get Log Page Extended Data: Supported 00:08:20.534 Telemetry Log Pages: Not Supported 00:08:20.534 Persistent Event Log Pages: Not Supported 00:08:20.534 Supported Log Pages Log Page: May Support 00:08:20.534 Commands Supported & Effects Log Page: Not Supported 00:08:20.534 Feature Identifiers & Effects Log Page:May Support 00:08:20.534 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.534 Data Area 4 for Telemetry Log: Not Supported 00:08:20.534 Error Log Page Entries Supported: 1 00:08:20.534 Keep Alive: Not Supported 00:08:20.534 00:08:20.534 NVM Command Set Attributes 00:08:20.534 ========================== 00:08:20.534 Submission Queue Entry Size 00:08:20.534 Max: 64 00:08:20.534 Min: 64 00:08:20.534 Completion Queue Entry Size 00:08:20.534 Max: 16 00:08:20.534 Min: 16 00:08:20.534 Number of Namespaces: 256 00:08:20.534 Compare Command: Supported 00:08:20.534 Write Uncorrectable Command: Not Supported 00:08:20.534 Dataset Management Command: Supported 00:08:20.534 Write Zeroes Command: Supported 00:08:20.534 Set Features Save Field: Supported 00:08:20.534 Reservations: Not Supported 00:08:20.534 Timestamp: Supported 00:08:20.534 Copy: Supported 00:08:20.534 Volatile Write Cache: Present 00:08:20.534 Atomic Write Unit (Normal): 1 00:08:20.534 Atomic Write Unit (PFail): 1 00:08:20.534 Atomic Compare & Write Unit: 1 00:08:20.534 Fused Compare & Write: Not Supported 00:08:20.534 Scatter-Gather List 00:08:20.534 SGL Command Set: Supported 00:08:20.534 SGL Keyed: Not Supported 00:08:20.534 SGL Bit Bucket Descriptor: Not Supported 00:08:20.534 
SGL Metadata Pointer: Not Supported 00:08:20.534 Oversized SGL: Not Supported 00:08:20.534 SGL Metadata Address: Not Supported 00:08:20.534 SGL Offset: Not Supported 00:08:20.534 Transport SGL Data Block: Not Supported 00:08:20.534 Replay Protected Memory Block: Not Supported 00:08:20.534 00:08:20.534 Firmware Slot Information 00:08:20.534 ========================= 00:08:20.534 Active slot: 1 00:08:20.534 Slot 1 Firmware Revision: 1.0 00:08:20.534 00:08:20.534 00:08:20.534 Commands Supported and Effects 00:08:20.534 ============================== 00:08:20.534 Admin Commands 00:08:20.534 -------------- 00:08:20.534 Delete I/O Submission Queue (00h): Supported 00:08:20.534 Create I/O Submission Queue (01h): Supported 00:08:20.534 Get Log Page (02h): Supported 00:08:20.534 Delete I/O Completion Queue (04h): Supported 00:08:20.534 Create I/O Completion Queue (05h): Supported 00:08:20.534 Identify (06h): Supported 00:08:20.534 Abort (08h): Supported 00:08:20.534 Set Features (09h): Supported 00:08:20.534 Get Features (0Ah): Supported 00:08:20.534 Asynchronous Event Request (0Ch): Supported 00:08:20.534 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.534 Directive Send (19h): Supported 00:08:20.534 Directive Receive (1Ah): Supported 00:08:20.534 Virtualization Management (1Ch): Supported 00:08:20.534 Doorbell Buffer Config (7Ch): Supported 00:08:20.534 Format NVM (80h): Supported LBA-Change 00:08:20.534 I/O Commands 00:08:20.534 ------------ 00:08:20.534 Flush (00h): Supported LBA-Change 00:08:20.534 Write (01h): Supported LBA-Change 00:08:20.534 Read (02h): Supported 00:08:20.534 Compare (05h): Supported 00:08:20.534 Write Zeroes (08h): Supported LBA-Change 00:08:20.534 Dataset Management (09h): Supported LBA-Change 00:08:20.534 Unknown (0Ch): Supported 00:08:20.534 Unknown (12h): Supported 00:08:20.534 Copy (19h): Supported LBA-Change 00:08:20.534 Unknown (1Dh): Supported LBA-Change 00:08:20.534 00:08:20.534 Error Log 00:08:20.534 ========= 00:08:20.534 00:08:20.534 Arbitration 00:08:20.534 =========== 00:08:20.534 Arbitration Burst: no limit 00:08:20.534 00:08:20.534 Power Management 00:08:20.534 ================ 00:08:20.534 Number of Power States: 1 00:08:20.534 Current Power State: Power State #0 00:08:20.534 Power State #0: 00:08:20.535 Max Power: 25.00 W 00:08:20.535 Non-Operational State: Operational 00:08:20.535 Entry Latency: 16 microseconds 00:08:20.535 Exit Latency: 4 microseconds 00:08:20.535 Relative Read Throughput: 0 00:08:20.535 Relative Read Latency: 0 00:08:20.535 Relative Write Throughput: 0 00:08:20.535 Relative Write Latency: 0 00:08:20.535 Idle Power: Not Reported 00:08:20.535 Active Power: Not Reported 00:08:20.535 Non-Operational Permissive Mode: Not Supported 00:08:20.535 00:08:20.535 Health Information 00:08:20.535 ================== 00:08:20.535 Critical Warnings: 00:08:20.535 Available Spare Space: OK 00:08:20.535 Temperature: OK 00:08:20.535 Device Reliability: OK 00:08:20.535 Read Only: No 00:08:20.535 Volatile Memory Backup: OK 00:08:20.535 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.535 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.535 Available Spare: 0% 00:08:20.535 Available Spare Threshold: 0% 00:08:20.535 Life Percentage Used: 0% 00:08:20.535 Data Units Read: 673 00:08:20.535 Data Units Written: 601 00:08:20.535 Host Read Commands: 37196 00:08:20.535 Host Write Commands: 36982 00:08:20.535 Controller Busy Time: 0 minutes 00:08:20.535 Power Cycles: 0 00:08:20.535 Power On Hours: 0 hours 00:08:20.535 Unsafe Shutdowns: 0 
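[Editor's note] The Size/Capacity/Utilization figures in the namespace sections are LBA counts; the GiB values in parentheses follow from LBA count times the data size of the current LBA format. Illustrative arithmetic only, using two namespaces from this report:

    echo $(( 262144 * 4096 ))    # 1073741824 bytes = 1 GiB (serial 12343's FDP namespace)
    echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB (serial 12341's namespace)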
00:08:20.535 Unrecoverable Media Errors: 0 00:08:20.535 Lifetime Error Log Entries: 0 00:08:20.535 Warning Temperature Time: 0 minutes 00:08:20.535 Critical Temperature Time: 0 minutes 00:08:20.535 00:08:20.535 Number of Queues 00:08:20.535 ================ 00:08:20.535 Number of I/O Submission Queues: 64 00:08:20.535 Number of I/O Completion Queues: 64 00:08:20.535 00:08:20.535 ZNS Specific Controller Data 00:08:20.535 ============================ 00:08:20.535 Zone Append Size Limit: 0 00:08:20.535 00:08:20.535 00:08:20.535 Active Namespaces 00:08:20.535 ================= 00:08:20.535 Namespace ID:1 00:08:20.535 Error Recovery Timeout: Unlimited 00:08:20.535 Command Set Identifier: NVM (00h) 00:08:20.535 Deallocate: Supported 00:08:20.535 Deallocated/Unwritten Error: Supported 00:08:20.535 Deallocated Read Value: All 0x00 00:08:20.535 Deallocate in Write Zeroes: Not Supported 00:08:20.535 Deallocated Guard Field: 0xFFFF 00:08:20.535 Flush: Supported 00:08:20.535 Reservation: Not Supported 00:08:20.535 Metadata Transferred as: Separate Metadata Buffer 00:08:20.535 Namespace Sharing Capabilities: Private 00:08:20.535 Size (in LBAs): 1548666 (5GiB) 00:08:20.535 Capacity (in LBAs): 1548666 (5GiB) 00:08:20.535 Utilization (in LBAs): 1548666 (5GiB) 00:08:20.535 Thin Provisioning: Not Supported 00:08:20.535 Per-NS Atomic Units: No 00:08:20.535 Maximum Single Source Range Length: 128 00:08:20.535 Maximum Copy Length: 128 00:08:20.535 Maximum Source Range Count: 128 00:08:20.535 NGUID/EUI64 Never Reused: No 00:08:20.535 Namespace Write Protected: No 00:08:20.535 Number of LBA Formats: 8 00:08:20.535 Current LBA Format: LBA Format #07 00:08:20.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.535 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.535 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.535 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.535 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.535 00:08:20.535 NVM Specific Namespace Data 00:08:20.535 =========================== 00:08:20.535 Logical Block Storage Tag Mask: 0 00:08:20.535 Protection Information Capabilities: 00:08:20.535 16b Guard Protection Information Storage Tag Support: No 00:08:20.535 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.535 Storage Tag Check Read Support: No 00:08:20.535 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.535 ===================================================== 00:08:20.535 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.535 
===================================================== 00:08:20.535 Controller Capabilities/Features 00:08:20.535 ================================ 00:08:20.535 Vendor ID: 1b36 00:08:20.535 Subsystem Vendor ID: 1af4 00:08:20.535 Serial Number: 12341 00:08:20.535 Model Number: QEMU NVMe Ctrl 00:08:20.535 Firmware Version: 8.0.0 00:08:20.535 Recommended Arb Burst: 6 00:08:20.535 IEEE OUI Identifier: 00 54 52 00:08:20.535 Multi-path I/O 00:08:20.535 May have multiple subsystem ports: No 00:08:20.535 May have multiple controllers: No 00:08:20.535 Associated with SR-IOV VF: No 00:08:20.535 Max Data Transfer Size: 524288 00:08:20.535 Max Number of Namespaces: 256 00:08:20.535 Max Number of I/O Queues: 64 00:08:20.535 NVMe Specification Version (VS): 1.4 00:08:20.535 NVMe Specification Version (Identify): 1.4 00:08:20.535 Maximum Queue Entries: 2048 00:08:20.535 Contiguous Queues Required: Yes 00:08:20.535 Arbitration Mechanisms Supported 00:08:20.535 Weighted Round Robin: Not Supported 00:08:20.535 Vendor Specific: Not Supported 00:08:20.535 Reset Timeout: 7500 ms 00:08:20.535 Doorbell Stride: 4 bytes 00:08:20.535 NVM Subsystem Reset: Not Supported 00:08:20.535 Command Sets Supported 00:08:20.535 NVM Command Set: Supported 00:08:20.535 Boot Partition: Not Supported 00:08:20.535 Memory Page Size Minimum: 4096 bytes 00:08:20.535 Memory Page Size Maximum: 65536 bytes 00:08:20.535 Persistent Memory Region: Not Supported 00:08:20.535 Optional Asynchronous Events Supported 00:08:20.535 Namespace Attribute Notices: Supported 00:08:20.535 Firmware Activation Notices: Not Supported 00:08:20.535 ANA Change Notices: Not Supported 00:08:20.535 PLE Aggregate Log Change Notices: Not Supported 00:08:20.535 LBA Status Info Alert Notices: Not Supported 00:08:20.535 EGE Aggregate Log Change Notices: Not Supported 00:08:20.535 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.535 Zone Descriptor Change Notices: Not Supported 00:08:20.535 Discovery Log Change Notices: Not Supported 00:08:20.535 Controller Attributes 00:08:20.535 128-bit Host Identifier: Not Supported 00:08:20.535 Non-Operational Permissive Mode: Not Supported 00:08:20.535 NVM Sets: Not Supported 00:08:20.535 Read Recovery Levels: Not Supported 00:08:20.535 Endurance Groups: Not Supported 00:08:20.535 Predictable Latency Mode: Not Supported 00:08:20.535 Traffic Based Keep ALive: Not Supported 00:08:20.535 Namespace Granularity: Not Supported 00:08:20.535 SQ Associations: Not Supported 00:08:20.535 UUID List: Not Supported 00:08:20.535 Multi-Domain Subsystem: Not Supported 00:08:20.535 Fixed Capacity Management: Not Supported 00:08:20.535 Variable Capacity Management: Not Supported 00:08:20.535 Delete Endurance Group: Not Supported 00:08:20.535 Delete NVM Set: Not Supported 00:08:20.535 Extended LBA Formats Supported: Supported 00:08:20.535 Flexible Data Placement Supported: Not Supported 00:08:20.535 00:08:20.535 Controller Memory Buffer Support 00:08:20.535 ================================ 00:08:20.535 Supported: No 00:08:20.535 00:08:20.535 Persistent Memory Region Support 00:08:20.535 ================================ 00:08:20.535 Supported: No 00:08:20.535 00:08:20.535 Admin Command Set Attributes 00:08:20.535 ============================ 00:08:20.535 Security Send/Receive: Not Supported 00:08:20.535 Format NVM: Supported 00:08:20.535 Firmware Activate/Download: Not Supported 00:08:20.535 Namespace Management: Supported 00:08:20.535 Device Self-Test: Not Supported 00:08:20.535 Directives: Supported 00:08:20.535 NVMe-MI: Not Supported 
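[Editor's note] Four QEMU controllers appear in this report, serials 12340 through 12343 at PCI addresses 0000:00:10.0 through 0000:00:13.0. A hypothetical helper over a saved copy of the report (identify.txt, an assumption, not a file the test writes) pairs each controller banner with its serial:

    grep -E 'NVMe Controller at|Serial Number:' identify.txt
    # NVMe Controller at 0000:00:13.0 [1b36:0010]
    # Serial Number: 12343
    # ...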
00:08:20.535 Virtualization Management: Not Supported 00:08:20.535 Doorbell Buffer Config: Supported 00:08:20.535 Get LBA Status Capability: Not Supported 00:08:20.535 Command & Feature Lockdown Capability: Not Supported 00:08:20.535 Abort Command Limit: 4 00:08:20.535 Async Event Request Limit: 4 00:08:20.535 Number of Firmware Slots: N/A 00:08:20.535 Firmware Slot 1 Read-Only: N/A 00:08:20.535 Firmware Activation Without Reset: N/A 00:08:20.535 Multiple Update Detection Support: N/A 00:08:20.536 Firmware Update Granularity: No Information Provided 00:08:20.536 Per-Namespace SMART Log: Yes 00:08:20.536 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.536 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:20.536 Command Effects Log Page: Supported 00:08:20.536 Get Log Page Extended Data: Supported 00:08:20.536 Telemetry Log Pages: Not Supported 00:08:20.536 Persistent Event Log Pages: Not Supported 00:08:20.536 Supported Log Pages Log Page: May Support 00:08:20.536 Commands Supported & Effects Log Page: Not Supported 00:08:20.536 Feature Identifiers & Effects Log Page:May Support 00:08:20.536 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.536 Data Area 4 for Telemetry Log: Not Supported 00:08:20.536 Error Log Page Entries Supported: 1 00:08:20.536 Keep Alive: Not Supported 00:08:20.536 00:08:20.536 NVM Command Set Attributes 00:08:20.536 ========================== 00:08:20.536 Submission Queue Entry Size 00:08:20.536 Max: 64 00:08:20.536 Min: 64 00:08:20.536 Completion Queue Entry Size 00:08:20.536 Max: 16 00:08:20.536 Min: 16 00:08:20.536 Number of Namespaces: 256 00:08:20.536 Compare Command: Supported 00:08:20.536 Write Uncorrectable Command: Not Supported 00:08:20.536 Dataset Management Command: Supported 00:08:20.536 Write Zeroes Command: Supported 00:08:20.536 Set Features Save Field: Supported 00:08:20.536 Reservations: Not Supported 00:08:20.536 Timestamp: Supported 00:08:20.536 Copy: Supported 00:08:20.536 Volatile Write Cache: Present 00:08:20.536 Atomic Write Unit (Normal): 1 00:08:20.536 Atomic Write Unit (PFail): 1 00:08:20.536 Atomic Compare & Write Unit: 1 00:08:20.536 Fused Compare & Write: Not Supported 00:08:20.536 Scatter-Gather List 00:08:20.536 SGL Command Set: Supported 00:08:20.536 SGL Keyed: Not Supported 00:08:20.536 SGL Bit Bucket Descriptor: Not Supported 00:08:20.536 SGL Metadata Pointer: Not Supported 00:08:20.536 Oversized SGL: Not Supported 00:08:20.536 SGL Metadata Address: Not Supported 00:08:20.536 SGL Offset: Not Supported 00:08:20.536 Transport SGL Data Block: Not Supported 00:08:20.536 Replay Protected Memory Block: Not Supported 00:08:20.536 00:08:20.536 Firmware Slot Information 00:08:20.536 ========================= 00:08:20.536 Active slot: 1 00:08:20.536 Slot 1 Firmware Revision: 1.0 00:08:20.536 00:08:20.536 00:08:20.536 Commands Supported and Effects 00:08:20.536 ============================== 00:08:20.536 Admin Commands 00:08:20.536 -------------- 00:08:20.536 Delete I/O Submission Queue (00h): Supported 00:08:20.536 Create I/O Submission Queue (01h): Supported 00:08:20.536 Get Log Page (02h): Supported 00:08:20.536 Delete I/O Completion Queue (04h): Supported 00:08:20.536 Create I/O Completion Queue (05h): Supported 00:08:20.536 Identify (06h): Supported 00:08:20.536 Abort (08h): Supported 00:08:20.536 Set Features (09h): Supported 00:08:20.536 Get Featu[2024-11-29 10:13:59.874389] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74339 terminated unexpected 00:08:20.536 [2024-11-29 
10:13:59.875815] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74339 terminated unexpected 00:08:20.536 [2024-11-29 10:13:59.876200] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74339 terminated unexpected 00:08:20.536 [2024-11-29 10:13:59.876569] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74339 terminated unexpected 00:08:20.536 res (0Ah): Supported 00:08:20.536 Asynchronous Event Request (0Ch): Supported 00:08:20.536 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.536 Directive Send (19h): Supported 00:08:20.536 Directive Receive (1Ah): Supported 00:08:20.536 Virtualization Management (1Ch): Supported 00:08:20.536 Doorbell Buffer Config (7Ch): Supported 00:08:20.536 Format NVM (80h): Supported LBA-Change 00:08:20.536 I/O Commands 00:08:20.536 ------------ 00:08:20.536 Flush (00h): Supported LBA-Change 00:08:20.536 Write (01h): Supported LBA-Change 00:08:20.536 Read (02h): Supported 00:08:20.536 Compare (05h): Supported 00:08:20.536 Write Zeroes (08h): Supported LBA-Change 00:08:20.536 Dataset Management (09h): Supported LBA-Change 00:08:20.536 Unknown (0Ch): Supported 00:08:20.536 Unknown (12h): Supported 00:08:20.536 Copy (19h): Supported LBA-Change 00:08:20.536 Unknown (1Dh): Supported LBA-Change 00:08:20.536 00:08:20.536 Error Log 00:08:20.536 ========= 00:08:20.536 00:08:20.536 Arbitration 00:08:20.536 =========== 00:08:20.536 Arbitration Burst: no limit 00:08:20.536 00:08:20.536 Power Management 00:08:20.536 ================ 00:08:20.536 Number of Power States: 1 00:08:20.536 Current Power State: Power State #0 00:08:20.536 Power State #0: 00:08:20.536 Max Power: 25.00 W 00:08:20.536 Non-Operational State: Operational 00:08:20.536 Entry Latency: 16 microseconds 00:08:20.536 Exit Latency: 4 microseconds 00:08:20.536 Relative Read Throughput: 0 00:08:20.536 Relative Read Latency: 0 00:08:20.536 Relative Write Throughput: 0 00:08:20.536 Relative Write Latency: 0 00:08:20.536 Idle Power: Not Reported 00:08:20.536 Active Power: Not Reported 00:08:20.536 Non-Operational Permissive Mode: Not Supported 00:08:20.536 00:08:20.536 Health Information 00:08:20.536 ================== 00:08:20.536 Critical Warnings: 00:08:20.536 Available Spare Space: OK 00:08:20.536 Temperature: OK 00:08:20.536 Device Reliability: OK 00:08:20.536 Read Only: No 00:08:20.536 Volatile Memory Backup: OK 00:08:20.536 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.536 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.536 Available Spare: 0% 00:08:20.536 Available Spare Threshold: 0% 00:08:20.536 Life Percentage Used: 0% 00:08:20.536 Data Units Read: 1068 00:08:20.536 Data Units Written: 934 00:08:20.536 Host Read Commands: 55565 00:08:20.536 Host Write Commands: 54349 00:08:20.536 Controller Busy Time: 0 minutes 00:08:20.536 Power Cycles: 0 00:08:20.536 Power On Hours: 0 hours 00:08:20.536 Unsafe Shutdowns: 0 00:08:20.536 Unrecoverable Media Errors: 0 00:08:20.536 Lifetime Error Log Entries: 0 00:08:20.536 Warning Temperature Time: 0 minutes 00:08:20.536 Critical Temperature Time: 0 minutes 00:08:20.536 00:08:20.536 Number of Queues 00:08:20.536 ================ 00:08:20.536 Number of I/O Submission Queues: 64 00:08:20.536 Number of I/O Completion Queues: 64 00:08:20.536 00:08:20.536 ZNS Specific Controller Data 00:08:20.536 ============================ 00:08:20.536 Zone Append Size Limit: 0 00:08:20.536 00:08:20.536 00:08:20.536 Active Namespaces 00:08:20.536 
================= 00:08:20.536 Namespace ID:1 00:08:20.536 Error Recovery Timeout: Unlimited 00:08:20.536 Command Set Identifier: NVM (00h) 00:08:20.536 Deallocate: Supported 00:08:20.536 Deallocated/Unwritten Error: Supported 00:08:20.536 Deallocated Read Value: All 0x00 00:08:20.536 Deallocate in Write Zeroes: Not Supported 00:08:20.536 Deallocated Guard Field: 0xFFFF 00:08:20.536 Flush: Supported 00:08:20.536 Reservation: Not Supported 00:08:20.536 Namespace Sharing Capabilities: Private 00:08:20.536 Size (in LBAs): 1310720 (5GiB) 00:08:20.536 Capacity (in LBAs): 1310720 (5GiB) 00:08:20.536 Utilization (in LBAs): 1310720 (5GiB) 00:08:20.536 Thin Provisioning: Not Supported 00:08:20.536 Per-NS Atomic Units: No 00:08:20.536 Maximum Single Source Range Length: 128 00:08:20.536 Maximum Copy Length: 128 00:08:20.536 Maximum Source Range Count: 128 00:08:20.536 NGUID/EUI64 Never Reused: No 00:08:20.536 Namespace Write Protected: No 00:08:20.536 Number of LBA Formats: 8 00:08:20.536 Current LBA Format: LBA Format #04 00:08:20.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.536 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.536 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.536 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.536 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.536 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.536 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.536 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.536 00:08:20.536 NVM Specific Namespace Data 00:08:20.536 =========================== 00:08:20.536 Logical Block Storage Tag Mask: 0 00:08:20.536 Protection Information Capabilities: 00:08:20.536 16b Guard Protection Information Storage Tag Support: No 00:08:20.536 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.536 Storage Tag Check Read Support: No 00:08:20.536 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.536 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.536 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.536 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.537 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.537 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.537 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.537 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.537 ===================================================== 00:08:20.537 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.537 ===================================================== 00:08:20.537 Controller Capabilities/Features 00:08:20.537 ================================ 00:08:20.537 Vendor ID: 1b36 00:08:20.537 Subsystem Vendor ID: 1af4 00:08:20.537 Serial Number: 12342 00:08:20.537 Model Number: QEMU NVMe Ctrl 00:08:20.537 Firmware Version: 8.0.0 00:08:20.537 Recommended Arb Burst: 6 00:08:20.537 IEEE OUI Identifier: 00 54 52 00:08:20.537 Multi-path I/O 00:08:20.537 May have multiple subsystem ports: No 00:08:20.537 May have multiple controllers: No 00:08:20.537 Associated with SR-IOV VF: No 00:08:20.537 Max Data Transfer Size: 524288 00:08:20.537 Max Number of Namespaces: 
256 00:08:20.537 Max Number of I/O Queues: 64 00:08:20.537 NVMe Specification Version (VS): 1.4 00:08:20.537 NVMe Specification Version (Identify): 1.4 00:08:20.537 Maximum Queue Entries: 2048 00:08:20.537 Contiguous Queues Required: Yes 00:08:20.537 Arbitration Mechanisms Supported 00:08:20.537 Weighted Round Robin: Not Supported 00:08:20.537 Vendor Specific: Not Supported 00:08:20.537 Reset Timeout: 7500 ms 00:08:20.537 Doorbell Stride: 4 bytes 00:08:20.537 NVM Subsystem Reset: Not Supported 00:08:20.537 Command Sets Supported 00:08:20.537 NVM Command Set: Supported 00:08:20.537 Boot Partition: Not Supported 00:08:20.537 Memory Page Size Minimum: 4096 bytes 00:08:20.537 Memory Page Size Maximum: 65536 bytes 00:08:20.537 Persistent Memory Region: Not Supported 00:08:20.537 Optional Asynchronous Events Supported 00:08:20.537 Namespace Attribute Notices: Supported 00:08:20.537 Firmware Activation Notices: Not Supported 00:08:20.537 ANA Change Notices: Not Supported 00:08:20.537 PLE Aggregate Log Change Notices: Not Supported 00:08:20.537 LBA Status Info Alert Notices: Not Supported 00:08:20.537 EGE Aggregate Log Change Notices: Not Supported 00:08:20.537 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.537 Zone Descriptor Change Notices: Not Supported 00:08:20.537 Discovery Log Change Notices: Not Supported 00:08:20.537 Controller Attributes 00:08:20.537 128-bit Host Identifier: Not Supported 00:08:20.537 Non-Operational Permissive Mode: Not Supported 00:08:20.537 NVM Sets: Not Supported 00:08:20.537 Read Recovery Levels: Not Supported 00:08:20.537 Endurance Groups: Not Supported 00:08:20.537 Predictable Latency Mode: Not Supported 00:08:20.537 Traffic Based Keep ALive: Not Supported 00:08:20.537 Namespace Granularity: Not Supported 00:08:20.537 SQ Associations: Not Supported 00:08:20.537 UUID List: Not Supported 00:08:20.537 Multi-Domain Subsystem: Not Supported 00:08:20.537 Fixed Capacity Management: Not Supported 00:08:20.537 Variable Capacity Management: Not Supported 00:08:20.537 Delete Endurance Group: Not Supported 00:08:20.537 Delete NVM Set: Not Supported 00:08:20.537 Extended LBA Formats Supported: Supported 00:08:20.537 Flexible Data Placement Supported: Not Supported 00:08:20.537 00:08:20.537 Controller Memory Buffer Support 00:08:20.537 ================================ 00:08:20.537 Supported: No 00:08:20.537 00:08:20.537 Persistent Memory Region Support 00:08:20.537 ================================ 00:08:20.537 Supported: No 00:08:20.537 00:08:20.537 Admin Command Set Attributes 00:08:20.537 ============================ 00:08:20.537 Security Send/Receive: Not Supported 00:08:20.537 Format NVM: Supported 00:08:20.537 Firmware Activate/Download: Not Supported 00:08:20.537 Namespace Management: Supported 00:08:20.537 Device Self-Test: Not Supported 00:08:20.537 Directives: Supported 00:08:20.537 NVMe-MI: Not Supported 00:08:20.537 Virtualization Management: Not Supported 00:08:20.537 Doorbell Buffer Config: Supported 00:08:20.537 Get LBA Status Capability: Not Supported 00:08:20.537 Command & Feature Lockdown Capability: Not Supported 00:08:20.537 Abort Command Limit: 4 00:08:20.537 Async Event Request Limit: 4 00:08:20.537 Number of Firmware Slots: N/A 00:08:20.537 Firmware Slot 1 Read-Only: N/A 00:08:20.537 Firmware Activation Without Reset: N/A 00:08:20.537 Multiple Update Detection Support: N/A 00:08:20.537 Firmware Update Granularity: No Information Provided 00:08:20.537 Per-Namespace SMART Log: Yes 00:08:20.537 Asymmetric Namespace Access Log Page: Not Supported 
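A note on the "Doorbell Stride: 4 bytes" field reported above: per the NVMe base specification, the stride (4 << CAP.DSTRD) fixes where each queue's doorbell register sits in BAR0, with the submission-queue tail doorbell for queue y at offset 0x1000 + (2y) * stride and the matching completion-queue head doorbell at 0x1000 + (2y + 1) * stride. A minimal shell check of that arithmetic for queue 1, under the 4-byte stride these QEMU controllers report:

  # doorbell offsets for queue id 1 with a 4-byte stride (CAP.DSTRD = 0)
  stride=4; qid=1
  printf 'SQ%d tail doorbell offset: 0x%x\n' "$qid" $(( 0x1000 + (2*qid)     * stride ))  # 0x1008
  printf 'CQ%d head doorbell offset: 0x%x\n' "$qid" $(( 0x1000 + (2*qid + 1) * stride ))  # 0x100c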
00:08:20.537 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:20.537 Command Effects Log Page: Supported 00:08:20.537 Get Log Page Extended Data: Supported 00:08:20.537 Telemetry Log Pages: Not Supported 00:08:20.537 Persistent Event Log Pages: Not Supported 00:08:20.537 Supported Log Pages Log Page: May Support 00:08:20.537 Commands Supported & Effects Log Page: Not Supported 00:08:20.537 Feature Identifiers & Effects Log Page:May Support 00:08:20.537 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.537 Data Area 4 for Telemetry Log: Not Supported 00:08:20.537 Error Log Page Entries Supported: 1 00:08:20.537 Keep Alive: Not Supported 00:08:20.537 00:08:20.537 NVM Command Set Attributes 00:08:20.537 ========================== 00:08:20.537 Submission Queue Entry Size 00:08:20.537 Max: 64 00:08:20.537 Min: 64 00:08:20.537 Completion Queue Entry Size 00:08:20.537 Max: 16 00:08:20.537 Min: 16 00:08:20.537 Number of Namespaces: 256 00:08:20.537 Compare Command: Supported 00:08:20.537 Write Uncorrectable Command: Not Supported 00:08:20.537 Dataset Management Command: Supported 00:08:20.537 Write Zeroes Command: Supported 00:08:20.537 Set Features Save Field: Supported 00:08:20.537 Reservations: Not Supported 00:08:20.537 Timestamp: Supported 00:08:20.537 Copy: Supported 00:08:20.537 Volatile Write Cache: Present 00:08:20.537 Atomic Write Unit (Normal): 1 00:08:20.537 Atomic Write Unit (PFail): 1 00:08:20.537 Atomic Compare & Write Unit: 1 00:08:20.537 Fused Compare & Write: Not Supported 00:08:20.537 Scatter-Gather List 00:08:20.537 SGL Command Set: Supported 00:08:20.537 SGL Keyed: Not Supported 00:08:20.537 SGL Bit Bucket Descriptor: Not Supported 00:08:20.537 SGL Metadata Pointer: Not Supported 00:08:20.537 Oversized SGL: Not Supported 00:08:20.537 SGL Metadata Address: Not Supported 00:08:20.537 SGL Offset: Not Supported 00:08:20.537 Transport SGL Data Block: Not Supported 00:08:20.537 Replay Protected Memory Block: Not Supported 00:08:20.537 00:08:20.537 Firmware Slot Information 00:08:20.537 ========================= 00:08:20.537 Active slot: 1 00:08:20.537 Slot 1 Firmware Revision: 1.0 00:08:20.537 00:08:20.537 00:08:20.537 Commands Supported and Effects 00:08:20.537 ============================== 00:08:20.537 Admin Commands 00:08:20.537 -------------- 00:08:20.537 Delete I/O Submission Queue (00h): Supported 00:08:20.537 Create I/O Submission Queue (01h): Supported 00:08:20.537 Get Log Page (02h): Supported 00:08:20.537 Delete I/O Completion Queue (04h): Supported 00:08:20.537 Create I/O Completion Queue (05h): Supported 00:08:20.537 Identify (06h): Supported 00:08:20.537 Abort (08h): Supported 00:08:20.537 Set Features (09h): Supported 00:08:20.537 Get Features (0Ah): Supported 00:08:20.537 Asynchronous Event Request (0Ch): Supported 00:08:20.538 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.538 Directive Send (19h): Supported 00:08:20.538 Directive Receive (1Ah): Supported 00:08:20.538 Virtualization Management (1Ch): Supported 00:08:20.538 Doorbell Buffer Config (7Ch): Supported 00:08:20.538 Format NVM (80h): Supported LBA-Change 00:08:20.538 I/O Commands 00:08:20.538 ------------ 00:08:20.538 Flush (00h): Supported LBA-Change 00:08:20.538 Write (01h): Supported LBA-Change 00:08:20.538 Read (02h): Supported 00:08:20.538 Compare (05h): Supported 00:08:20.538 Write Zeroes (08h): Supported LBA-Change 00:08:20.538 Dataset Management (09h): Supported LBA-Change 00:08:20.538 Unknown (0Ch): Supported 00:08:20.538 Unknown (12h): Supported 00:08:20.538 Copy (19h): 
Supported LBA-Change 00:08:20.538 Unknown (1Dh): Supported LBA-Change 00:08:20.538 00:08:20.538 Error Log 00:08:20.538 ========= 00:08:20.538 00:08:20.538 Arbitration 00:08:20.538 =========== 00:08:20.538 Arbitration Burst: no limit 00:08:20.538 00:08:20.538 Power Management 00:08:20.538 ================ 00:08:20.538 Number of Power States: 1 00:08:20.538 Current Power State: Power State #0 00:08:20.538 Power State #0: 00:08:20.538 Max Power: 25.00 W 00:08:20.538 Non-Operational State: Operational 00:08:20.538 Entry Latency: 16 microseconds 00:08:20.538 Exit Latency: 4 microseconds 00:08:20.538 Relative Read Throughput: 0 00:08:20.538 Relative Read Latency: 0 00:08:20.538 Relative Write Throughput: 0 00:08:20.538 Relative Write Latency: 0 00:08:20.538 Idle Power: Not Reported 00:08:20.538 Active Power: Not Reported 00:08:20.538 Non-Operational Permissive Mode: Not Supported 00:08:20.538 00:08:20.538 Health Information 00:08:20.538 ================== 00:08:20.538 Critical Warnings: 00:08:20.538 Available Spare Space: OK 00:08:20.538 Temperature: OK 00:08:20.538 Device Reliability: OK 00:08:20.538 Read Only: No 00:08:20.538 Volatile Memory Backup: OK 00:08:20.538 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.538 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.538 Available Spare: 0% 00:08:20.538 Available Spare Threshold: 0% 00:08:20.538 Life Percentage Used: 0% 00:08:20.538 Data Units Read: 2216 00:08:20.538 Data Units Written: 2004 00:08:20.538 Host Read Commands: 113976 00:08:20.538 Host Write Commands: 112245 00:08:20.538 Controller Busy Time: 0 minutes 00:08:20.538 Power Cycles: 0 00:08:20.538 Power On Hours: 0 hours 00:08:20.538 Unsafe Shutdowns: 0 00:08:20.538 Unrecoverable Media Errors: 0 00:08:20.538 Lifetime Error Log Entries: 0 00:08:20.538 Warning Temperature Time: 0 minutes 00:08:20.538 Critical Temperature Time: 0 minutes 00:08:20.538 00:08:20.538 Number of Queues 00:08:20.538 ================ 00:08:20.538 Number of I/O Submission Queues: 64 00:08:20.538 Number of I/O Completion Queues: 64 00:08:20.538 00:08:20.538 ZNS Specific Controller Data 00:08:20.538 ============================ 00:08:20.538 Zone Append Size Limit: 0 00:08:20.538 00:08:20.538 00:08:20.538 Active Namespaces 00:08:20.538 ================= 00:08:20.538 Namespace ID:1 00:08:20.538 Error Recovery Timeout: Unlimited 00:08:20.538 Command Set Identifier: NVM (00h) 00:08:20.538 Deallocate: Supported 00:08:20.538 Deallocated/Unwritten Error: Supported 00:08:20.538 Deallocated Read Value: All 0x00 00:08:20.538 Deallocate in Write Zeroes: Not Supported 00:08:20.538 Deallocated Guard Field: 0xFFFF 00:08:20.538 Flush: Supported 00:08:20.538 Reservation: Not Supported 00:08:20.538 Namespace Sharing Capabilities: Private 00:08:20.538 Size (in LBAs): 1048576 (4GiB) 00:08:20.538 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.538 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.538 Thin Provisioning: Not Supported 00:08:20.538 Per-NS Atomic Units: No 00:08:20.538 Maximum Single Source Range Length: 128 00:08:20.538 Maximum Copy Length: 128 00:08:20.538 Maximum Source Range Count: 128 00:08:20.538 NGUID/EUI64 Never Reused: No 00:08:20.538 Namespace Write Protected: No 00:08:20.538 Number of LBA Formats: 8 00:08:20.538 Current LBA Format: LBA Format #04 00:08:20.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.538 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.538 00:08:20.538 NVM Specific Namespace Data 00:08:20.538 =========================== 00:08:20.538 Logical Block Storage Tag Mask: 0 00:08:20.538 Protection Information Capabilities: 00:08:20.538 16b Guard Protection Information Storage Tag Support: No 00:08:20.538 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.538 Storage Tag Check Read Support: No 00:08:20.538 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Namespace ID:2 00:08:20.538 Error Recovery Timeout: Unlimited 00:08:20.538 Command Set Identifier: NVM (00h) 00:08:20.538 Deallocate: Supported 00:08:20.538 Deallocated/Unwritten Error: Supported 00:08:20.538 Deallocated Read Value: All 0x00 00:08:20.538 Deallocate in Write Zeroes: Not Supported 00:08:20.538 Deallocated Guard Field: 0xFFFF 00:08:20.538 Flush: Supported 00:08:20.538 Reservation: Not Supported 00:08:20.538 Namespace Sharing Capabilities: Private 00:08:20.538 Size (in LBAs): 1048576 (4GiB) 00:08:20.538 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.538 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.538 Thin Provisioning: Not Supported 00:08:20.538 Per-NS Atomic Units: No 00:08:20.538 Maximum Single Source Range Length: 128 00:08:20.538 Maximum Copy Length: 128 00:08:20.538 Maximum Source Range Count: 128 00:08:20.538 NGUID/EUI64 Never Reused: No 00:08:20.538 Namespace Write Protected: No 00:08:20.538 Number of LBA Formats: 8 00:08:20.538 Current LBA Format: LBA Format #04 00:08:20.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.538 00:08:20.538 NVM Specific Namespace Data 00:08:20.538 =========================== 00:08:20.538 Logical Block Storage Tag Mask: 0 00:08:20.538 Protection Information Capabilities: 00:08:20.538 16b Guard Protection Information Storage Tag Support: No 00:08:20.538 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.538 Storage Tag Check Read Support: No 00:08:20.538 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.538 Namespace ID:3 00:08:20.538 Error Recovery Timeout: Unlimited 00:08:20.538 Command Set Identifier: NVM (00h) 00:08:20.538 Deallocate: Supported 00:08:20.538 Deallocated/Unwritten Error: Supported 00:08:20.538 Deallocated Read Value: All 0x00 00:08:20.538 Deallocate in Write Zeroes: Not Supported 00:08:20.538 Deallocated Guard Field: 0xFFFF 00:08:20.538 Flush: Supported 00:08:20.538 Reservation: Not Supported 00:08:20.538 Namespace Sharing Capabilities: Private 00:08:20.538 Size (in LBAs): 1048576 (4GiB) 00:08:20.538 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.538 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.538 Thin Provisioning: Not Supported 00:08:20.538 Per-NS Atomic Units: No 00:08:20.538 Maximum Single Source Range Length: 128 00:08:20.538 Maximum Copy Length: 128 00:08:20.538 Maximum Source Range Count: 128 00:08:20.538 NGUID/EUI64 Never Reused: No 00:08:20.538 Namespace Write Protected: No 00:08:20.538 Number of LBA Formats: 8 00:08:20.538 Current LBA Format: LBA Format #04 00:08:20.539 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.539 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.539 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.539 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.539 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.539 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.539 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.539 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.539 00:08:20.539 NVM Specific Namespace Data 00:08:20.539 =========================== 00:08:20.539 Logical Block Storage Tag Mask: 0 00:08:20.539 Protection Information Capabilities: 00:08:20.539 16b Guard Protection Information Storage Tag Support: No 00:08:20.539 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.539 Storage Tag Check Read Support: No 00:08:20.539 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.539 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.539 10:13:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:20.800 ===================================================== 00:08:20.800 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.800 ===================================================== 00:08:20.800 Controller Capabilities/Features 00:08:20.800 ================================ 00:08:20.800 Vendor ID: 1b36 00:08:20.800 Subsystem Vendor ID: 1af4 00:08:20.800 Serial Number: 12340 00:08:20.800 Model Number: QEMU NVMe Ctrl 00:08:20.800 Firmware Version: 8.0.0 00:08:20.801 Recommended Arb Burst: 6 00:08:20.801 IEEE OUI Identifier: 00 54 52 00:08:20.801 Multi-path I/O 00:08:20.801 May have multiple subsystem ports: No 00:08:20.801 May have multiple controllers: No 00:08:20.801 Associated with SR-IOV VF: No 00:08:20.801 Max Data Transfer Size: 524288 00:08:20.801 Max Number of Namespaces: 256 00:08:20.801 Max Number of I/O Queues: 64 00:08:20.801 NVMe Specification Version (VS): 1.4 00:08:20.801 NVMe Specification Version (Identify): 1.4 00:08:20.801 Maximum Queue Entries: 2048 00:08:20.801 Contiguous Queues Required: Yes 00:08:20.801 Arbitration Mechanisms Supported 00:08:20.801 Weighted Round Robin: Not Supported 00:08:20.801 Vendor Specific: Not Supported 00:08:20.801 Reset Timeout: 7500 ms 00:08:20.801 Doorbell Stride: 4 bytes 00:08:20.801 NVM Subsystem Reset: Not Supported 00:08:20.801 Command Sets Supported 00:08:20.801 NVM Command Set: Supported 00:08:20.801 Boot Partition: Not Supported 00:08:20.801 Memory Page Size Minimum: 4096 bytes 00:08:20.801 Memory Page Size Maximum: 65536 bytes 00:08:20.801 Persistent Memory Region: Not Supported 00:08:20.801 Optional Asynchronous Events Supported 00:08:20.801 Namespace Attribute Notices: Supported 00:08:20.801 Firmware Activation Notices: Not Supported 00:08:20.801 ANA Change Notices: Not Supported 00:08:20.801 PLE Aggregate Log Change Notices: Not Supported 00:08:20.801 LBA Status Info Alert Notices: Not Supported 00:08:20.801 EGE Aggregate Log Change Notices: Not Supported 00:08:20.801 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.801 Zone Descriptor Change Notices: Not Supported 00:08:20.801 Discovery Log Change Notices: Not Supported 00:08:20.801 Controller Attributes 00:08:20.801 128-bit Host Identifier: Not Supported 00:08:20.801 Non-Operational Permissive Mode: Not Supported 00:08:20.801 NVM Sets: Not Supported 00:08:20.801 Read Recovery Levels: Not Supported 00:08:20.801 Endurance Groups: Not Supported 00:08:20.801 Predictable Latency Mode: Not Supported 00:08:20.801 Traffic Based Keep ALive: Not Supported 00:08:20.801 Namespace Granularity: Not Supported 00:08:20.801 SQ Associations: Not Supported 00:08:20.801 UUID List: Not Supported 00:08:20.801 Multi-Domain Subsystem: Not Supported 00:08:20.801 Fixed Capacity Management: Not Supported 00:08:20.801 Variable Capacity Management: Not Supported 00:08:20.801 Delete Endurance Group: Not Supported 00:08:20.801 Delete NVM Set: Not Supported 00:08:20.801 Extended LBA Formats Supported: Supported 00:08:20.801 Flexible Data Placement Supported: Not Supported 00:08:20.801 00:08:20.801 Controller Memory Buffer Support 00:08:20.801 ================================ 00:08:20.801 Supported: No 00:08:20.801 00:08:20.801 Persistent Memory Region Support 00:08:20.801 ================================ 00:08:20.801 Supported: No 00:08:20.801 00:08:20.801 Admin Command Set Attributes 00:08:20.801 ============================ 00:08:20.801 Security Send/Receive: Not Supported 00:08:20.801 
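Each dump in this part of the log is a single spdk_nvme_identify run driven by the for-bdf loop in nvme.sh. The -r argument is an SPDK transport ID string (whitespace-separated key:value pairs; here the PCIe transport plus the controller's domain:bus:device.function address), and -i appears to set the shared-memory group ID, which only matters when the tool must coexist with another SPDK process. A sketch of reproducing one dump by hand from the build tree shown in the log; running as root is an assumption (PCIe device access normally requires it):

  # re-run identify against a single controller; change traddr to pick a different BDF
  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0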
Format NVM: Supported 00:08:20.801 Firmware Activate/Download: Not Supported 00:08:20.801 Namespace Management: Supported 00:08:20.801 Device Self-Test: Not Supported 00:08:20.801 Directives: Supported 00:08:20.801 NVMe-MI: Not Supported 00:08:20.801 Virtualization Management: Not Supported 00:08:20.801 Doorbell Buffer Config: Supported 00:08:20.801 Get LBA Status Capability: Not Supported 00:08:20.801 Command & Feature Lockdown Capability: Not Supported 00:08:20.801 Abort Command Limit: 4 00:08:20.801 Async Event Request Limit: 4 00:08:20.801 Number of Firmware Slots: N/A 00:08:20.801 Firmware Slot 1 Read-Only: N/A 00:08:20.801 Firmware Activation Without Reset: N/A 00:08:20.801 Multiple Update Detection Support: N/A 00:08:20.801 Firmware Update Granularity: No Information Provided 00:08:20.801 Per-Namespace SMART Log: Yes 00:08:20.801 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.801 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:20.801 Command Effects Log Page: Supported 00:08:20.801 Get Log Page Extended Data: Supported 00:08:20.801 Telemetry Log Pages: Not Supported 00:08:20.801 Persistent Event Log Pages: Not Supported 00:08:20.801 Supported Log Pages Log Page: May Support 00:08:20.801 Commands Supported & Effects Log Page: Not Supported 00:08:20.801 Feature Identifiers & Effects Log Page:May Support 00:08:20.801 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.801 Data Area 4 for Telemetry Log: Not Supported 00:08:20.801 Error Log Page Entries Supported: 1 00:08:20.801 Keep Alive: Not Supported 00:08:20.801 00:08:20.801 NVM Command Set Attributes 00:08:20.801 ========================== 00:08:20.801 Submission Queue Entry Size 00:08:20.801 Max: 64 00:08:20.801 Min: 64 00:08:20.801 Completion Queue Entry Size 00:08:20.801 Max: 16 00:08:20.801 Min: 16 00:08:20.801 Number of Namespaces: 256 00:08:20.801 Compare Command: Supported 00:08:20.801 Write Uncorrectable Command: Not Supported 00:08:20.801 Dataset Management Command: Supported 00:08:20.801 Write Zeroes Command: Supported 00:08:20.801 Set Features Save Field: Supported 00:08:20.801 Reservations: Not Supported 00:08:20.801 Timestamp: Supported 00:08:20.801 Copy: Supported 00:08:20.801 Volatile Write Cache: Present 00:08:20.801 Atomic Write Unit (Normal): 1 00:08:20.801 Atomic Write Unit (PFail): 1 00:08:20.801 Atomic Compare & Write Unit: 1 00:08:20.801 Fused Compare & Write: Not Supported 00:08:20.801 Scatter-Gather List 00:08:20.801 SGL Command Set: Supported 00:08:20.801 SGL Keyed: Not Supported 00:08:20.801 SGL Bit Bucket Descriptor: Not Supported 00:08:20.801 SGL Metadata Pointer: Not Supported 00:08:20.801 Oversized SGL: Not Supported 00:08:20.801 SGL Metadata Address: Not Supported 00:08:20.801 SGL Offset: Not Supported 00:08:20.801 Transport SGL Data Block: Not Supported 00:08:20.801 Replay Protected Memory Block: Not Supported 00:08:20.801 00:08:20.801 Firmware Slot Information 00:08:20.801 ========================= 00:08:20.801 Active slot: 1 00:08:20.801 Slot 1 Firmware Revision: 1.0 00:08:20.801 00:08:20.801 00:08:20.801 Commands Supported and Effects 00:08:20.801 ============================== 00:08:20.801 Admin Commands 00:08:20.801 -------------- 00:08:20.801 Delete I/O Submission Queue (00h): Supported 00:08:20.801 Create I/O Submission Queue (01h): Supported 00:08:20.801 Get Log Page (02h): Supported 00:08:20.801 Delete I/O Completion Queue (04h): Supported 00:08:20.801 Create I/O Completion Queue (05h): Supported 00:08:20.801 Identify (06h): Supported 00:08:20.801 Abort (08h): Supported 
00:08:20.801 Set Features (09h): Supported 00:08:20.801 Get Features (0Ah): Supported 00:08:20.801 Asynchronous Event Request (0Ch): Supported 00:08:20.801 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.801 Directive Send (19h): Supported 00:08:20.801 Directive Receive (1Ah): Supported 00:08:20.801 Virtualization Management (1Ch): Supported 00:08:20.801 Doorbell Buffer Config (7Ch): Supported 00:08:20.801 Format NVM (80h): Supported LBA-Change 00:08:20.801 I/O Commands 00:08:20.801 ------------ 00:08:20.801 Flush (00h): Supported LBA-Change 00:08:20.801 Write (01h): Supported LBA-Change 00:08:20.801 Read (02h): Supported 00:08:20.801 Compare (05h): Supported 00:08:20.801 Write Zeroes (08h): Supported LBA-Change 00:08:20.801 Dataset Management (09h): Supported LBA-Change 00:08:20.801 Unknown (0Ch): Supported 00:08:20.801 Unknown (12h): Supported 00:08:20.801 Copy (19h): Supported LBA-Change 00:08:20.801 Unknown (1Dh): Supported LBA-Change 00:08:20.801 00:08:20.801 Error Log 00:08:20.801 ========= 00:08:20.801 00:08:20.801 Arbitration 00:08:20.801 =========== 00:08:20.801 Arbitration Burst: no limit 00:08:20.801 00:08:20.801 Power Management 00:08:20.802 ================ 00:08:20.802 Number of Power States: 1 00:08:20.802 Current Power State: Power State #0 00:08:20.802 Power State #0: 00:08:20.802 Max Power: 25.00 W 00:08:20.802 Non-Operational State: Operational 00:08:20.802 Entry Latency: 16 microseconds 00:08:20.802 Exit Latency: 4 microseconds 00:08:20.802 Relative Read Throughput: 0 00:08:20.802 Relative Read Latency: 0 00:08:20.802 Relative Write Throughput: 0 00:08:20.802 Relative Write Latency: 0 00:08:20.802 Idle Power: Not Reported 00:08:20.802 Active Power: Not Reported 00:08:20.802 Non-Operational Permissive Mode: Not Supported 00:08:20.802 00:08:20.802 Health Information 00:08:20.802 ================== 00:08:20.802 Critical Warnings: 00:08:20.802 Available Spare Space: OK 00:08:20.802 Temperature: OK 00:08:20.802 Device Reliability: OK 00:08:20.802 Read Only: No 00:08:20.802 Volatile Memory Backup: OK 00:08:20.802 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.802 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.802 Available Spare: 0% 00:08:20.802 Available Spare Threshold: 0% 00:08:20.802 Life Percentage Used: 0% 00:08:20.802 Data Units Read: 673 00:08:20.802 Data Units Written: 601 00:08:20.802 Host Read Commands: 37196 00:08:20.802 Host Write Commands: 36982 00:08:20.802 Controller Busy Time: 0 minutes 00:08:20.802 Power Cycles: 0 00:08:20.802 Power On Hours: 0 hours 00:08:20.802 Unsafe Shutdowns: 0 00:08:20.802 Unrecoverable Media Errors: 0 00:08:20.802 Lifetime Error Log Entries: 0 00:08:20.802 Warning Temperature Time: 0 minutes 00:08:20.802 Critical Temperature Time: 0 minutes 00:08:20.802 00:08:20.802 Number of Queues 00:08:20.802 ================ 00:08:20.802 Number of I/O Submission Queues: 64 00:08:20.802 Number of I/O Completion Queues: 64 00:08:20.802 00:08:20.802 ZNS Specific Controller Data 00:08:20.802 ============================ 00:08:20.802 Zone Append Size Limit: 0 00:08:20.802 00:08:20.802 00:08:20.802 Active Namespaces 00:08:20.802 ================= 00:08:20.802 Namespace ID:1 00:08:20.802 Error Recovery Timeout: Unlimited 00:08:20.802 Command Set Identifier: NVM (00h) 00:08:20.802 Deallocate: Supported 00:08:20.802 Deallocated/Unwritten Error: Supported 00:08:20.802 Deallocated Read Value: All 0x00 00:08:20.802 Deallocate in Write Zeroes: Not Supported 00:08:20.802 Deallocated Guard Field: 0xFFFF 00:08:20.802 Flush: 
Supported 00:08:20.802 Reservation: Not Supported 00:08:20.802 Metadata Transferred as: Separate Metadata Buffer 00:08:20.802 Namespace Sharing Capabilities: Private 00:08:20.802 Size (in LBAs): 1548666 (5GiB) 00:08:20.802 Capacity (in LBAs): 1548666 (5GiB) 00:08:20.802 Utilization (in LBAs): 1548666 (5GiB) 00:08:20.802 Thin Provisioning: Not Supported 00:08:20.802 Per-NS Atomic Units: No 00:08:20.802 Maximum Single Source Range Length: 128 00:08:20.802 Maximum Copy Length: 128 00:08:20.802 Maximum Source Range Count: 128 00:08:20.802 NGUID/EUI64 Never Reused: No 00:08:20.802 Namespace Write Protected: No 00:08:20.802 Number of LBA Formats: 8 00:08:20.802 Current LBA Format: LBA Format #07 00:08:20.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.802 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.802 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.802 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.802 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.802 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.802 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.802 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.802 00:08:20.802 NVM Specific Namespace Data 00:08:20.802 =========================== 00:08:20.802 Logical Block Storage Tag Mask: 0 00:08:20.802 Protection Information Capabilities: 00:08:20.802 16b Guard Protection Information Storage Tag Support: No 00:08:20.802 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.802 Storage Tag Check Read Support: No 00:08:20.802 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.802 10:14:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.802 10:14:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:21.065 ===================================================== 00:08:21.065 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.065 ===================================================== 00:08:21.065 Controller Capabilities/Features 00:08:21.065 ================================ 00:08:21.065 Vendor ID: 1b36 00:08:21.065 Subsystem Vendor ID: 1af4 00:08:21.065 Serial Number: 12341 00:08:21.065 Model Number: QEMU NVMe Ctrl 00:08:21.065 Firmware Version: 8.0.0 00:08:21.065 Recommended Arb Burst: 6 00:08:21.065 IEEE OUI Identifier: 00 54 52 00:08:21.065 Multi-path I/O 00:08:21.065 May have multiple subsystem ports: No 00:08:21.065 May have multiple controllers: No 00:08:21.065 Associated with SR-IOV VF: No 00:08:21.065 Max Data Transfer Size: 524288 00:08:21.065 Max Number of Namespaces: 256 00:08:21.065 Max Number of I/O Queues: 64 00:08:21.065 NVMe 
Specification Version (VS): 1.4 00:08:21.065 NVMe Specification Version (Identify): 1.4 00:08:21.065 Maximum Queue Entries: 2048 00:08:21.065 Contiguous Queues Required: Yes 00:08:21.065 Arbitration Mechanisms Supported 00:08:21.065 Weighted Round Robin: Not Supported 00:08:21.065 Vendor Specific: Not Supported 00:08:21.065 Reset Timeout: 7500 ms 00:08:21.065 Doorbell Stride: 4 bytes 00:08:21.065 NVM Subsystem Reset: Not Supported 00:08:21.065 Command Sets Supported 00:08:21.065 NVM Command Set: Supported 00:08:21.065 Boot Partition: Not Supported 00:08:21.065 Memory Page Size Minimum: 4096 bytes 00:08:21.065 Memory Page Size Maximum: 65536 bytes 00:08:21.065 Persistent Memory Region: Not Supported 00:08:21.065 Optional Asynchronous Events Supported 00:08:21.065 Namespace Attribute Notices: Supported 00:08:21.065 Firmware Activation Notices: Not Supported 00:08:21.065 ANA Change Notices: Not Supported 00:08:21.065 PLE Aggregate Log Change Notices: Not Supported 00:08:21.065 LBA Status Info Alert Notices: Not Supported 00:08:21.065 EGE Aggregate Log Change Notices: Not Supported 00:08:21.065 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.065 Zone Descriptor Change Notices: Not Supported 00:08:21.065 Discovery Log Change Notices: Not Supported 00:08:21.065 Controller Attributes 00:08:21.065 128-bit Host Identifier: Not Supported 00:08:21.065 Non-Operational Permissive Mode: Not Supported 00:08:21.065 NVM Sets: Not Supported 00:08:21.065 Read Recovery Levels: Not Supported 00:08:21.065 Endurance Groups: Not Supported 00:08:21.065 Predictable Latency Mode: Not Supported 00:08:21.065 Traffic Based Keep ALive: Not Supported 00:08:21.065 Namespace Granularity: Not Supported 00:08:21.065 SQ Associations: Not Supported 00:08:21.065 UUID List: Not Supported 00:08:21.065 Multi-Domain Subsystem: Not Supported 00:08:21.065 Fixed Capacity Management: Not Supported 00:08:21.065 Variable Capacity Management: Not Supported 00:08:21.065 Delete Endurance Group: Not Supported 00:08:21.065 Delete NVM Set: Not Supported 00:08:21.065 Extended LBA Formats Supported: Supported 00:08:21.065 Flexible Data Placement Supported: Not Supported 00:08:21.065 00:08:21.065 Controller Memory Buffer Support 00:08:21.065 ================================ 00:08:21.065 Supported: No 00:08:21.065 00:08:21.065 Persistent Memory Region Support 00:08:21.065 ================================ 00:08:21.065 Supported: No 00:08:21.065 00:08:21.065 Admin Command Set Attributes 00:08:21.065 ============================ 00:08:21.065 Security Send/Receive: Not Supported 00:08:21.065 Format NVM: Supported 00:08:21.065 Firmware Activate/Download: Not Supported 00:08:21.065 Namespace Management: Supported 00:08:21.065 Device Self-Test: Not Supported 00:08:21.065 Directives: Supported 00:08:21.065 NVMe-MI: Not Supported 00:08:21.065 Virtualization Management: Not Supported 00:08:21.065 Doorbell Buffer Config: Supported 00:08:21.065 Get LBA Status Capability: Not Supported 00:08:21.065 Command & Feature Lockdown Capability: Not Supported 00:08:21.065 Abort Command Limit: 4 00:08:21.065 Async Event Request Limit: 4 00:08:21.065 Number of Firmware Slots: N/A 00:08:21.065 Firmware Slot 1 Read-Only: N/A 00:08:21.065 Firmware Activation Without Reset: N/A 00:08:21.065 Multiple Update Detection Support: N/A 00:08:21.065 Firmware Update Granularity: No Information Provided 00:08:21.065 Per-Namespace SMART Log: Yes 00:08:21.065 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.065 Subsystem NQN: nqn.2019-08.org.qemu:12341 
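With four controllers dumped back to back, the Serial Number, Model Number, Firmware Version, and Subsystem NQN lines (12340, 12341, 12342, 12343 here) are the quickest way to tell the blocks apart. A small filter over a saved run, with root assumed as above and a hypothetical output file name:

  # capture one dump and pull out just the identity lines
  sudo ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 > identify-11.txt
  grep -E 'Serial Number|Model Number|Firmware Version|Subsystem NQN' identify-11.txt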
00:08:21.065 Command Effects Log Page: Supported 00:08:21.065 Get Log Page Extended Data: Supported 00:08:21.065 Telemetry Log Pages: Not Supported 00:08:21.065 Persistent Event Log Pages: Not Supported 00:08:21.065 Supported Log Pages Log Page: May Support 00:08:21.065 Commands Supported & Effects Log Page: Not Supported 00:08:21.065 Feature Identifiers & Effects Log Page:May Support 00:08:21.065 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.065 Data Area 4 for Telemetry Log: Not Supported 00:08:21.065 Error Log Page Entries Supported: 1 00:08:21.065 Keep Alive: Not Supported 00:08:21.065 00:08:21.066 NVM Command Set Attributes 00:08:21.066 ========================== 00:08:21.066 Submission Queue Entry Size 00:08:21.066 Max: 64 00:08:21.066 Min: 64 00:08:21.066 Completion Queue Entry Size 00:08:21.066 Max: 16 00:08:21.066 Min: 16 00:08:21.066 Number of Namespaces: 256 00:08:21.066 Compare Command: Supported 00:08:21.066 Write Uncorrectable Command: Not Supported 00:08:21.066 Dataset Management Command: Supported 00:08:21.066 Write Zeroes Command: Supported 00:08:21.066 Set Features Save Field: Supported 00:08:21.066 Reservations: Not Supported 00:08:21.066 Timestamp: Supported 00:08:21.066 Copy: Supported 00:08:21.066 Volatile Write Cache: Present 00:08:21.066 Atomic Write Unit (Normal): 1 00:08:21.066 Atomic Write Unit (PFail): 1 00:08:21.066 Atomic Compare & Write Unit: 1 00:08:21.066 Fused Compare & Write: Not Supported 00:08:21.066 Scatter-Gather List 00:08:21.066 SGL Command Set: Supported 00:08:21.066 SGL Keyed: Not Supported 00:08:21.066 SGL Bit Bucket Descriptor: Not Supported 00:08:21.066 SGL Metadata Pointer: Not Supported 00:08:21.066 Oversized SGL: Not Supported 00:08:21.066 SGL Metadata Address: Not Supported 00:08:21.066 SGL Offset: Not Supported 00:08:21.066 Transport SGL Data Block: Not Supported 00:08:21.066 Replay Protected Memory Block: Not Supported 00:08:21.066 00:08:21.066 Firmware Slot Information 00:08:21.066 ========================= 00:08:21.066 Active slot: 1 00:08:21.066 Slot 1 Firmware Revision: 1.0 00:08:21.066 00:08:21.066 00:08:21.066 Commands Supported and Effects 00:08:21.066 ============================== 00:08:21.066 Admin Commands 00:08:21.066 -------------- 00:08:21.066 Delete I/O Submission Queue (00h): Supported 00:08:21.066 Create I/O Submission Queue (01h): Supported 00:08:21.066 Get Log Page (02h): Supported 00:08:21.066 Delete I/O Completion Queue (04h): Supported 00:08:21.066 Create I/O Completion Queue (05h): Supported 00:08:21.066 Identify (06h): Supported 00:08:21.066 Abort (08h): Supported 00:08:21.066 Set Features (09h): Supported 00:08:21.066 Get Features (0Ah): Supported 00:08:21.066 Asynchronous Event Request (0Ch): Supported 00:08:21.066 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.066 Directive Send (19h): Supported 00:08:21.066 Directive Receive (1Ah): Supported 00:08:21.066 Virtualization Management (1Ch): Supported 00:08:21.066 Doorbell Buffer Config (7Ch): Supported 00:08:21.066 Format NVM (80h): Supported LBA-Change 00:08:21.066 I/O Commands 00:08:21.066 ------------ 00:08:21.066 Flush (00h): Supported LBA-Change 00:08:21.066 Write (01h): Supported LBA-Change 00:08:21.066 Read (02h): Supported 00:08:21.066 Compare (05h): Supported 00:08:21.066 Write Zeroes (08h): Supported LBA-Change 00:08:21.066 Dataset Management (09h): Supported LBA-Change 00:08:21.066 Unknown (0Ch): Supported 00:08:21.066 Unknown (12h): Supported 00:08:21.066 Copy (19h): Supported LBA-Change 00:08:21.066 Unknown (1Dh): 
Supported LBA-Change 00:08:21.066 00:08:21.066 Error Log 00:08:21.066 ========= 00:08:21.066 00:08:21.066 Arbitration 00:08:21.066 =========== 00:08:21.066 Arbitration Burst: no limit 00:08:21.066 00:08:21.066 Power Management 00:08:21.066 ================ 00:08:21.066 Number of Power States: 1 00:08:21.066 Current Power State: Power State #0 00:08:21.066 Power State #0: 00:08:21.066 Max Power: 25.00 W 00:08:21.066 Non-Operational State: Operational 00:08:21.066 Entry Latency: 16 microseconds 00:08:21.066 Exit Latency: 4 microseconds 00:08:21.066 Relative Read Throughput: 0 00:08:21.066 Relative Read Latency: 0 00:08:21.066 Relative Write Throughput: 0 00:08:21.066 Relative Write Latency: 0 00:08:21.066 Idle Power: Not Reported 00:08:21.066 Active Power: Not Reported 00:08:21.066 Non-Operational Permissive Mode: Not Supported 00:08:21.066 00:08:21.066 Health Information 00:08:21.066 ================== 00:08:21.066 Critical Warnings: 00:08:21.066 Available Spare Space: OK 00:08:21.066 Temperature: OK 00:08:21.066 Device Reliability: OK 00:08:21.066 Read Only: No 00:08:21.066 Volatile Memory Backup: OK 00:08:21.066 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.066 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.066 Available Spare: 0% 00:08:21.066 Available Spare Threshold: 0% 00:08:21.066 Life Percentage Used: 0% 00:08:21.066 Data Units Read: 1068 00:08:21.066 Data Units Written: 934 00:08:21.066 Host Read Commands: 55565 00:08:21.066 Host Write Commands: 54349 00:08:21.066 Controller Busy Time: 0 minutes 00:08:21.066 Power Cycles: 0 00:08:21.066 Power On Hours: 0 hours 00:08:21.066 Unsafe Shutdowns: 0 00:08:21.066 Unrecoverable Media Errors: 0 00:08:21.066 Lifetime Error Log Entries: 0 00:08:21.066 Warning Temperature Time: 0 minutes 00:08:21.066 Critical Temperature Time: 0 minutes 00:08:21.066 00:08:21.066 Number of Queues 00:08:21.066 ================ 00:08:21.066 Number of I/O Submission Queues: 64 00:08:21.066 Number of I/O Completion Queues: 64 00:08:21.066 00:08:21.066 ZNS Specific Controller Data 00:08:21.066 ============================ 00:08:21.066 Zone Append Size Limit: 0 00:08:21.066 00:08:21.066 00:08:21.066 Active Namespaces 00:08:21.066 ================= 00:08:21.066 Namespace ID:1 00:08:21.066 Error Recovery Timeout: Unlimited 00:08:21.066 Command Set Identifier: NVM (00h) 00:08:21.066 Deallocate: Supported 00:08:21.066 Deallocated/Unwritten Error: Supported 00:08:21.066 Deallocated Read Value: All 0x00 00:08:21.066 Deallocate in Write Zeroes: Not Supported 00:08:21.066 Deallocated Guard Field: 0xFFFF 00:08:21.066 Flush: Supported 00:08:21.066 Reservation: Not Supported 00:08:21.066 Namespace Sharing Capabilities: Private 00:08:21.066 Size (in LBAs): 1310720 (5GiB) 00:08:21.066 Capacity (in LBAs): 1310720 (5GiB) 00:08:21.066 Utilization (in LBAs): 1310720 (5GiB) 00:08:21.066 Thin Provisioning: Not Supported 00:08:21.066 Per-NS Atomic Units: No 00:08:21.066 Maximum Single Source Range Length: 128 00:08:21.066 Maximum Copy Length: 128 00:08:21.066 Maximum Source Range Count: 128 00:08:21.066 NGUID/EUI64 Never Reused: No 00:08:21.066 Namespace Write Protected: No 00:08:21.066 Number of LBA Formats: 8 00:08:21.066 Current LBA Format: LBA Format #04 00:08:21.066 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.066 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.066 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.066 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.066 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:21.066 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.066 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.066 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.066 00:08:21.066 NVM Specific Namespace Data 00:08:21.066 =========================== 00:08:21.066 Logical Block Storage Tag Mask: 0 00:08:21.066 Protection Information Capabilities: 00:08:21.066 16b Guard Protection Information Storage Tag Support: No 00:08:21.066 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.066 Storage Tag Check Read Support: No 00:08:21.066 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.066 10:14:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.066 10:14:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:21.066 ===================================================== 00:08:21.066 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.066 ===================================================== 00:08:21.066 Controller Capabilities/Features 00:08:21.066 ================================ 00:08:21.066 Vendor ID: 1b36 00:08:21.066 Subsystem Vendor ID: 1af4 00:08:21.066 Serial Number: 12342 00:08:21.066 Model Number: QEMU NVMe Ctrl 00:08:21.066 Firmware Version: 8.0.0 00:08:21.066 Recommended Arb Burst: 6 00:08:21.066 IEEE OUI Identifier: 00 54 52 00:08:21.066 Multi-path I/O 00:08:21.066 May have multiple subsystem ports: No 00:08:21.066 May have multiple controllers: No 00:08:21.066 Associated with SR-IOV VF: No 00:08:21.066 Max Data Transfer Size: 524288 00:08:21.066 Max Number of Namespaces: 256 00:08:21.066 Max Number of I/O Queues: 64 00:08:21.066 NVMe Specification Version (VS): 1.4 00:08:21.066 NVMe Specification Version (Identify): 1.4 00:08:21.067 Maximum Queue Entries: 2048 00:08:21.067 Contiguous Queues Required: Yes 00:08:21.067 Arbitration Mechanisms Supported 00:08:21.067 Weighted Round Robin: Not Supported 00:08:21.067 Vendor Specific: Not Supported 00:08:21.067 Reset Timeout: 7500 ms 00:08:21.067 Doorbell Stride: 4 bytes 00:08:21.067 NVM Subsystem Reset: Not Supported 00:08:21.067 Command Sets Supported 00:08:21.067 NVM Command Set: Supported 00:08:21.067 Boot Partition: Not Supported 00:08:21.067 Memory Page Size Minimum: 4096 bytes 00:08:21.067 Memory Page Size Maximum: 65536 bytes 00:08:21.067 Persistent Memory Region: Not Supported 00:08:21.067 Optional Asynchronous Events Supported 00:08:21.067 Namespace Attribute Notices: Supported 00:08:21.067 Firmware Activation Notices: Not Supported 00:08:21.067 ANA Change Notices: Not Supported 00:08:21.067 PLE Aggregate Log Change Notices: Not Supported 00:08:21.067 LBA Status Info Alert Notices: 
Not Supported 00:08:21.067 EGE Aggregate Log Change Notices: Not Supported 00:08:21.067 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.067 Zone Descriptor Change Notices: Not Supported 00:08:21.067 Discovery Log Change Notices: Not Supported 00:08:21.067 Controller Attributes 00:08:21.067 128-bit Host Identifier: Not Supported 00:08:21.067 Non-Operational Permissive Mode: Not Supported 00:08:21.067 NVM Sets: Not Supported 00:08:21.067 Read Recovery Levels: Not Supported 00:08:21.067 Endurance Groups: Not Supported 00:08:21.067 Predictable Latency Mode: Not Supported 00:08:21.067 Traffic Based Keep ALive: Not Supported 00:08:21.067 Namespace Granularity: Not Supported 00:08:21.067 SQ Associations: Not Supported 00:08:21.067 UUID List: Not Supported 00:08:21.067 Multi-Domain Subsystem: Not Supported 00:08:21.067 Fixed Capacity Management: Not Supported 00:08:21.067 Variable Capacity Management: Not Supported 00:08:21.067 Delete Endurance Group: Not Supported 00:08:21.067 Delete NVM Set: Not Supported 00:08:21.067 Extended LBA Formats Supported: Supported 00:08:21.067 Flexible Data Placement Supported: Not Supported 00:08:21.067 00:08:21.067 Controller Memory Buffer Support 00:08:21.067 ================================ 00:08:21.067 Supported: No 00:08:21.067 00:08:21.067 Persistent Memory Region Support 00:08:21.067 ================================ 00:08:21.067 Supported: No 00:08:21.067 00:08:21.067 Admin Command Set Attributes 00:08:21.067 ============================ 00:08:21.067 Security Send/Receive: Not Supported 00:08:21.067 Format NVM: Supported 00:08:21.067 Firmware Activate/Download: Not Supported 00:08:21.067 Namespace Management: Supported 00:08:21.067 Device Self-Test: Not Supported 00:08:21.067 Directives: Supported 00:08:21.067 NVMe-MI: Not Supported 00:08:21.067 Virtualization Management: Not Supported 00:08:21.067 Doorbell Buffer Config: Supported 00:08:21.067 Get LBA Status Capability: Not Supported 00:08:21.067 Command & Feature Lockdown Capability: Not Supported 00:08:21.067 Abort Command Limit: 4 00:08:21.067 Async Event Request Limit: 4 00:08:21.067 Number of Firmware Slots: N/A 00:08:21.067 Firmware Slot 1 Read-Only: N/A 00:08:21.067 Firmware Activation Without Reset: N/A 00:08:21.067 Multiple Update Detection Support: N/A 00:08:21.067 Firmware Update Granularity: No Information Provided 00:08:21.067 Per-Namespace SMART Log: Yes 00:08:21.067 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.067 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:21.067 Command Effects Log Page: Supported 00:08:21.067 Get Log Page Extended Data: Supported 00:08:21.067 Telemetry Log Pages: Not Supported 00:08:21.067 Persistent Event Log Pages: Not Supported 00:08:21.067 Supported Log Pages Log Page: May Support 00:08:21.067 Commands Supported & Effects Log Page: Not Supported 00:08:21.067 Feature Identifiers & Effects Log Page:May Support 00:08:21.067 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.067 Data Area 4 for Telemetry Log: Not Supported 00:08:21.067 Error Log Page Entries Supported: 1 00:08:21.067 Keep Alive: Not Supported 00:08:21.067 00:08:21.067 NVM Command Set Attributes 00:08:21.067 ========================== 00:08:21.067 Submission Queue Entry Size 00:08:21.067 Max: 64 00:08:21.067 Min: 64 00:08:21.067 Completion Queue Entry Size 00:08:21.067 Max: 16 00:08:21.067 Min: 16 00:08:21.067 Number of Namespaces: 256 00:08:21.067 Compare Command: Supported 00:08:21.067 Write Uncorrectable Command: Not Supported 00:08:21.067 Dataset Management Command: 
Supported 00:08:21.067 Write Zeroes Command: Supported 00:08:21.067 Set Features Save Field: Supported 00:08:21.067 Reservations: Not Supported 00:08:21.067 Timestamp: Supported 00:08:21.067 Copy: Supported 00:08:21.067 Volatile Write Cache: Present 00:08:21.067 Atomic Write Unit (Normal): 1 00:08:21.067 Atomic Write Unit (PFail): 1 00:08:21.067 Atomic Compare & Write Unit: 1 00:08:21.067 Fused Compare & Write: Not Supported 00:08:21.067 Scatter-Gather List 00:08:21.067 SGL Command Set: Supported 00:08:21.067 SGL Keyed: Not Supported 00:08:21.067 SGL Bit Bucket Descriptor: Not Supported 00:08:21.067 SGL Metadata Pointer: Not Supported 00:08:21.067 Oversized SGL: Not Supported 00:08:21.067 SGL Metadata Address: Not Supported 00:08:21.067 SGL Offset: Not Supported 00:08:21.067 Transport SGL Data Block: Not Supported 00:08:21.067 Replay Protected Memory Block: Not Supported 00:08:21.067 00:08:21.067 Firmware Slot Information 00:08:21.067 ========================= 00:08:21.067 Active slot: 1 00:08:21.067 Slot 1 Firmware Revision: 1.0 00:08:21.067 00:08:21.067 00:08:21.067 Commands Supported and Effects 00:08:21.067 ============================== 00:08:21.067 Admin Commands 00:08:21.067 -------------- 00:08:21.067 Delete I/O Submission Queue (00h): Supported 00:08:21.067 Create I/O Submission Queue (01h): Supported 00:08:21.067 Get Log Page (02h): Supported 00:08:21.067 Delete I/O Completion Queue (04h): Supported 00:08:21.067 Create I/O Completion Queue (05h): Supported 00:08:21.067 Identify (06h): Supported 00:08:21.067 Abort (08h): Supported 00:08:21.067 Set Features (09h): Supported 00:08:21.067 Get Features (0Ah): Supported 00:08:21.067 Asynchronous Event Request (0Ch): Supported 00:08:21.067 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.067 Directive Send (19h): Supported 00:08:21.067 Directive Receive (1Ah): Supported 00:08:21.067 Virtualization Management (1Ch): Supported 00:08:21.067 Doorbell Buffer Config (7Ch): Supported 00:08:21.067 Format NVM (80h): Supported LBA-Change 00:08:21.067 I/O Commands 00:08:21.067 ------------ 00:08:21.067 Flush (00h): Supported LBA-Change 00:08:21.067 Write (01h): Supported LBA-Change 00:08:21.067 Read (02h): Supported 00:08:21.067 Compare (05h): Supported 00:08:21.067 Write Zeroes (08h): Supported LBA-Change 00:08:21.067 Dataset Management (09h): Supported LBA-Change 00:08:21.067 Unknown (0Ch): Supported 00:08:21.067 Unknown (12h): Supported 00:08:21.067 Copy (19h): Supported LBA-Change 00:08:21.067 Unknown (1Dh): Supported LBA-Change 00:08:21.067 00:08:21.067 Error Log 00:08:21.067 ========= 00:08:21.067 00:08:21.067 Arbitration 00:08:21.067 =========== 00:08:21.067 Arbitration Burst: no limit 00:08:21.067 00:08:21.067 Power Management 00:08:21.067 ================ 00:08:21.067 Number of Power States: 1 00:08:21.067 Current Power State: Power State #0 00:08:21.067 Power State #0: 00:08:21.067 Max Power: 25.00 W 00:08:21.067 Non-Operational State: Operational 00:08:21.067 Entry Latency: 16 microseconds 00:08:21.067 Exit Latency: 4 microseconds 00:08:21.067 Relative Read Throughput: 0 00:08:21.067 Relative Read Latency: 0 00:08:21.067 Relative Write Throughput: 0 00:08:21.067 Relative Write Latency: 0 00:08:21.067 Idle Power: Not Reported 00:08:21.067 Active Power: Not Reported 00:08:21.067 Non-Operational Permissive Mode: Not Supported 00:08:21.067 00:08:21.067 Health Information 00:08:21.067 ================== 00:08:21.067 Critical Warnings: 00:08:21.067 Available Spare Space: OK 00:08:21.067 Temperature: OK 00:08:21.067 Device 
Reliability: OK 00:08:21.067 Read Only: No 00:08:21.067 Volatile Memory Backup: OK 00:08:21.067 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.067 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.067 Available Spare: 0% 00:08:21.067 Available Spare Threshold: 0% 00:08:21.067 Life Percentage Used: 0% 00:08:21.067 Data Units Read: 2216 00:08:21.067 Data Units Written: 2004 00:08:21.067 Host Read Commands: 113976 00:08:21.067 Host Write Commands: 112245 00:08:21.067 Controller Busy Time: 0 minutes 00:08:21.067 Power Cycles: 0 00:08:21.067 Power On Hours: 0 hours 00:08:21.067 Unsafe Shutdowns: 0 00:08:21.067 Unrecoverable Media Errors: 0 00:08:21.067 Lifetime Error Log Entries: 0 00:08:21.067 Warning Temperature Time: 0 minutes 00:08:21.067 Critical Temperature Time: 0 minutes 00:08:21.067 00:08:21.067 Number of Queues 00:08:21.067 ================ 00:08:21.068 Number of I/O Submission Queues: 64 00:08:21.068 Number of I/O Completion Queues: 64 00:08:21.068 00:08:21.068 ZNS Specific Controller Data 00:08:21.068 ============================ 00:08:21.068 Zone Append Size Limit: 0 00:08:21.068 00:08:21.068 00:08:21.068 Active Namespaces 00:08:21.068 ================= 00:08:21.068 Namespace ID:1 00:08:21.068 Error Recovery Timeout: Unlimited 00:08:21.068 Command Set Identifier: NVM (00h) 00:08:21.068 Deallocate: Supported 00:08:21.068 Deallocated/Unwritten Error: Supported 00:08:21.068 Deallocated Read Value: All 0x00 00:08:21.068 Deallocate in Write Zeroes: Not Supported 00:08:21.068 Deallocated Guard Field: 0xFFFF 00:08:21.068 Flush: Supported 00:08:21.068 Reservation: Not Supported 00:08:21.068 Namespace Sharing Capabilities: Private 00:08:21.068 Size (in LBAs): 1048576 (4GiB) 00:08:21.068 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.068 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.068 Thin Provisioning: Not Supported 00:08:21.068 Per-NS Atomic Units: No 00:08:21.068 Maximum Single Source Range Length: 128 00:08:21.068 Maximum Copy Length: 128 00:08:21.068 Maximum Source Range Count: 128 00:08:21.068 NGUID/EUI64 Never Reused: No 00:08:21.068 Namespace Write Protected: No 00:08:21.068 Number of LBA Formats: 8 00:08:21.068 Current LBA Format: LBA Format #04 00:08:21.068 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.068 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.068 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.068 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.068 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.068 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.068 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.068 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.068 00:08:21.068 NVM Specific Namespace Data 00:08:21.068 =========================== 00:08:21.068 Logical Block Storage Tag Mask: 0 00:08:21.068 Protection Information Capabilities: 00:08:21.068 16b Guard Protection Information Storage Tag Support: No 00:08:21.068 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.068 Storage Tag Check Read Support: No 00:08:21.068 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Namespace ID:2 00:08:21.068 Error Recovery Timeout: Unlimited 00:08:21.068 Command Set Identifier: NVM (00h) 00:08:21.068 Deallocate: Supported 00:08:21.068 Deallocated/Unwritten Error: Supported 00:08:21.068 Deallocated Read Value: All 0x00 00:08:21.068 Deallocate in Write Zeroes: Not Supported 00:08:21.068 Deallocated Guard Field: 0xFFFF 00:08:21.068 Flush: Supported 00:08:21.068 Reservation: Not Supported 00:08:21.068 Namespace Sharing Capabilities: Private 00:08:21.068 Size (in LBAs): 1048576 (4GiB) 00:08:21.068 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.068 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.068 Thin Provisioning: Not Supported 00:08:21.068 Per-NS Atomic Units: No 00:08:21.068 Maximum Single Source Range Length: 128 00:08:21.068 Maximum Copy Length: 128 00:08:21.068 Maximum Source Range Count: 128 00:08:21.068 NGUID/EUI64 Never Reused: No 00:08:21.068 Namespace Write Protected: No 00:08:21.068 Number of LBA Formats: 8 00:08:21.068 Current LBA Format: LBA Format #04 00:08:21.068 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.068 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.068 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.068 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.068 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.068 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.068 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.068 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.068 00:08:21.068 NVM Specific Namespace Data 00:08:21.068 =========================== 00:08:21.068 Logical Block Storage Tag Mask: 0 00:08:21.068 Protection Information Capabilities: 00:08:21.068 16b Guard Protection Information Storage Tag Support: No 00:08:21.068 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.068 Storage Tag Check Read Support: No 00:08:21.068 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Namespace ID:3 00:08:21.068 Error Recovery Timeout: Unlimited 00:08:21.068 Command Set Identifier: NVM (00h) 00:08:21.068 Deallocate: Supported 00:08:21.068 Deallocated/Unwritten Error: Supported 00:08:21.068 Deallocated Read Value: All 0x00 00:08:21.068 Deallocate in Write Zeroes: Not Supported 00:08:21.068 Deallocated Guard Field: 0xFFFF 00:08:21.068 Flush: Supported 00:08:21.068 Reservation: Not Supported 00:08:21.068 
Namespace Sharing Capabilities: Private 00:08:21.068 Size (in LBAs): 1048576 (4GiB) 00:08:21.068 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.068 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.068 Thin Provisioning: Not Supported 00:08:21.068 Per-NS Atomic Units: No 00:08:21.068 Maximum Single Source Range Length: 128 00:08:21.068 Maximum Copy Length: 128 00:08:21.068 Maximum Source Range Count: 128 00:08:21.068 NGUID/EUI64 Never Reused: No 00:08:21.068 Namespace Write Protected: No 00:08:21.068 Number of LBA Formats: 8 00:08:21.068 Current LBA Format: LBA Format #04 00:08:21.068 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.068 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.068 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.068 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.068 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.068 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.068 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.068 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.068 00:08:21.068 NVM Specific Namespace Data 00:08:21.068 =========================== 00:08:21.068 Logical Block Storage Tag Mask: 0 00:08:21.068 Protection Information Capabilities: 00:08:21.068 16b Guard Protection Information Storage Tag Support: No 00:08:21.068 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.068 Storage Tag Check Read Support: No 00:08:21.068 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.068 10:14:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.068 10:14:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:21.331 ===================================================== 00:08:21.331 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.331 ===================================================== 00:08:21.331 Controller Capabilities/Features 00:08:21.331 ================================ 00:08:21.331 Vendor ID: 1b36 00:08:21.331 Subsystem Vendor ID: 1af4 00:08:21.331 Serial Number: 12343 00:08:21.331 Model Number: QEMU NVMe Ctrl 00:08:21.331 Firmware Version: 8.0.0 00:08:21.331 Recommended Arb Burst: 6 00:08:21.331 IEEE OUI Identifier: 00 54 52 00:08:21.331 Multi-path I/O 00:08:21.331 May have multiple subsystem ports: No 00:08:21.331 May have multiple controllers: Yes 00:08:21.331 Associated with SR-IOV VF: No 00:08:21.331 Max Data Transfer Size: 524288 00:08:21.331 Max Number of Namespaces: 256 00:08:21.331 Max Number of I/O Queues: 64 00:08:21.331 NVMe Specification Version (VS): 1.4 00:08:21.331 NVMe Specification Version (Identify): 1.4 00:08:21.331 Maximum Queue Entries: 2048 
00:08:21.331 Contiguous Queues Required: Yes 00:08:21.331 Arbitration Mechanisms Supported 00:08:21.331 Weighted Round Robin: Not Supported 00:08:21.331 Vendor Specific: Not Supported 00:08:21.331 Reset Timeout: 7500 ms 00:08:21.331 Doorbell Stride: 4 bytes 00:08:21.331 NVM Subsystem Reset: Not Supported 00:08:21.331 Command Sets Supported 00:08:21.331 NVM Command Set: Supported 00:08:21.332 Boot Partition: Not Supported 00:08:21.332 Memory Page Size Minimum: 4096 bytes 00:08:21.332 Memory Page Size Maximum: 65536 bytes 00:08:21.332 Persistent Memory Region: Not Supported 00:08:21.332 Optional Asynchronous Events Supported 00:08:21.332 Namespace Attribute Notices: Supported 00:08:21.332 Firmware Activation Notices: Not Supported 00:08:21.332 ANA Change Notices: Not Supported 00:08:21.332 PLE Aggregate Log Change Notices: Not Supported 00:08:21.332 LBA Status Info Alert Notices: Not Supported 00:08:21.332 EGE Aggregate Log Change Notices: Not Supported 00:08:21.332 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.332 Zone Descriptor Change Notices: Not Supported 00:08:21.332 Discovery Log Change Notices: Not Supported 00:08:21.332 Controller Attributes 00:08:21.332 128-bit Host Identifier: Not Supported 00:08:21.332 Non-Operational Permissive Mode: Not Supported 00:08:21.332 NVM Sets: Not Supported 00:08:21.332 Read Recovery Levels: Not Supported 00:08:21.332 Endurance Groups: Supported 00:08:21.332 Predictable Latency Mode: Not Supported 00:08:21.332 Traffic Based Keep Alive: Not Supported 00:08:21.332 Namespace Granularity: Not Supported 00:08:21.332 SQ Associations: Not Supported 00:08:21.332 UUID List: Not Supported 00:08:21.332 Multi-Domain Subsystem: Not Supported 00:08:21.332 Fixed Capacity Management: Not Supported 00:08:21.332 Variable Capacity Management: Not Supported 00:08:21.332 Delete Endurance Group: Not Supported 00:08:21.332 Delete NVM Set: Not Supported 00:08:21.332 Extended LBA Formats Supported: Supported 00:08:21.332 Flexible Data Placement Supported: Supported 00:08:21.332 00:08:21.332 Controller Memory Buffer Support 00:08:21.332 ================================ 00:08:21.332 Supported: No 00:08:21.332 00:08:21.332 Persistent Memory Region Support 00:08:21.332 ================================ 00:08:21.332 Supported: No 00:08:21.332 00:08:21.332 Admin Command Set Attributes 00:08:21.332 ============================ 00:08:21.332 Security Send/Receive: Not Supported 00:08:21.332 Format NVM: Supported 00:08:21.332 Firmware Activate/Download: Not Supported 00:08:21.332 Namespace Management: Supported 00:08:21.332 Device Self-Test: Not Supported 00:08:21.332 Directives: Supported 00:08:21.332 NVMe-MI: Not Supported 00:08:21.332 Virtualization Management: Not Supported 00:08:21.332 Doorbell Buffer Config: Supported 00:08:21.332 Get LBA Status Capability: Not Supported 00:08:21.332 Command & Feature Lockdown Capability: Not Supported 00:08:21.332 Abort Command Limit: 4 00:08:21.332 Async Event Request Limit: 4 00:08:21.332 Number of Firmware Slots: N/A 00:08:21.332 Firmware Slot 1 Read-Only: N/A 00:08:21.332 Firmware Activation Without Reset: N/A 00:08:21.332 Multiple Update Detection Support: N/A 00:08:21.332 Firmware Update Granularity: No Information Provided 00:08:21.332 Per-Namespace SMART Log: Yes 00:08:21.332 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.332 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:21.332 Command Effects Log Page: Supported 00:08:21.332 Get Log Page Extended Data: Supported 00:08:21.332 Telemetry Log Pages: Not 
Supported 00:08:21.332 Persistent Event Log Pages: Not Supported 00:08:21.332 Supported Log Pages Log Page: May Support 00:08:21.332 Commands Supported & Effects Log Page: Not Supported 00:08:21.332 Feature Identifiers & Effects Log Page: May Support 00:08:21.332 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.332 Data Area 4 for Telemetry Log: Not Supported 00:08:21.332 Error Log Page Entries Supported: 1 00:08:21.332 Keep Alive: Not Supported 00:08:21.332 00:08:21.332 NVM Command Set Attributes 00:08:21.332 ========================== 00:08:21.332 Submission Queue Entry Size 00:08:21.332 Max: 64 00:08:21.332 Min: 64 00:08:21.332 Completion Queue Entry Size 00:08:21.332 Max: 16 00:08:21.332 Min: 16 00:08:21.332 Number of Namespaces: 256 00:08:21.332 Compare Command: Supported 00:08:21.332 Write Uncorrectable Command: Not Supported 00:08:21.332 Dataset Management Command: Supported 00:08:21.332 Write Zeroes Command: Supported 00:08:21.332 Set Features Save Field: Supported 00:08:21.332 Reservations: Not Supported 00:08:21.332 Timestamp: Supported 00:08:21.332 Copy: Supported 00:08:21.332 Volatile Write Cache: Present 00:08:21.332 Atomic Write Unit (Normal): 1 00:08:21.332 Atomic Write Unit (PFail): 1 00:08:21.332 Atomic Compare & Write Unit: 1 00:08:21.332 Fused Compare & Write: Not Supported 00:08:21.332 Scatter-Gather List 00:08:21.332 SGL Command Set: Supported 00:08:21.332 SGL Keyed: Not Supported 00:08:21.332 SGL Bit Bucket Descriptor: Not Supported 00:08:21.332 SGL Metadata Pointer: Not Supported 00:08:21.332 Oversized SGL: Not Supported 00:08:21.332 SGL Metadata Address: Not Supported 00:08:21.332 SGL Offset: Not Supported 00:08:21.332 Transport SGL Data Block: Not Supported 00:08:21.332 Replay Protected Memory Block: Not Supported 00:08:21.332 00:08:21.332 Firmware Slot Information 00:08:21.332 ========================= 00:08:21.332 Active slot: 1 00:08:21.332 Slot 1 Firmware Revision: 1.0 00:08:21.332 00:08:21.332 00:08:21.332 Commands Supported and Effects 00:08:21.332 ============================== 00:08:21.332 Admin Commands 00:08:21.332 -------------- 00:08:21.332 Delete I/O Submission Queue (00h): Supported 00:08:21.332 Create I/O Submission Queue (01h): Supported 00:08:21.332 Get Log Page (02h): Supported 00:08:21.332 Delete I/O Completion Queue (04h): Supported 00:08:21.332 Create I/O Completion Queue (05h): Supported 00:08:21.332 Identify (06h): Supported 00:08:21.332 Abort (08h): Supported 00:08:21.332 Set Features (09h): Supported 00:08:21.332 Get Features (0Ah): Supported 00:08:21.332 Asynchronous Event Request (0Ch): Supported 00:08:21.332 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.332 Directive Send (19h): Supported 00:08:21.332 Directive Receive (1Ah): Supported 00:08:21.332 Virtualization Management (1Ch): Supported 00:08:21.332 Doorbell Buffer Config (7Ch): Supported 00:08:21.332 Format NVM (80h): Supported LBA-Change 00:08:21.332 I/O Commands 00:08:21.332 ------------ 00:08:21.332 Flush (00h): Supported LBA-Change 00:08:21.332 Write (01h): Supported LBA-Change 00:08:21.332 Read (02h): Supported 00:08:21.332 Compare (05h): Supported 00:08:21.332 Write Zeroes (08h): Supported LBA-Change 00:08:21.332 Dataset Management (09h): Supported LBA-Change 00:08:21.332 Unknown (0Ch): Supported 00:08:21.332 Unknown (12h): Supported 00:08:21.332 Copy (19h): Supported LBA-Change 00:08:21.332 Unknown (1Dh): Supported LBA-Change 00:08:21.332 00:08:21.332 Error Log 00:08:21.332 ========= 00:08:21.332 00:08:21.332 Arbitration 00:08:21.332 =========== 
00:08:21.332 Arbitration Burst: no limit 00:08:21.332 00:08:21.332 Power Management 00:08:21.332 ================ 00:08:21.332 Number of Power States: 1 00:08:21.332 Current Power State: Power State #0 00:08:21.332 Power State #0: 00:08:21.332 Max Power: 25.00 W 00:08:21.332 Non-Operational State: Operational 00:08:21.332 Entry Latency: 16 microseconds 00:08:21.332 Exit Latency: 4 microseconds 00:08:21.332 Relative Read Throughput: 0 00:08:21.332 Relative Read Latency: 0 00:08:21.332 Relative Write Throughput: 0 00:08:21.332 Relative Write Latency: 0 00:08:21.332 Idle Power: Not Reported 00:08:21.332 Active Power: Not Reported 00:08:21.332 Non-Operational Permissive Mode: Not Supported 00:08:21.332 00:08:21.332 Health Information 00:08:21.332 ================== 00:08:21.332 Critical Warnings: 00:08:21.332 Available Spare Space: OK 00:08:21.332 Temperature: OK 00:08:21.332 Device Reliability: OK 00:08:21.332 Read Only: No 00:08:21.332 Volatile Memory Backup: OK 00:08:21.332 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.332 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.332 Available Spare: 0% 00:08:21.332 Available Spare Threshold: 0% 00:08:21.332 Life Percentage Used: 0% 00:08:21.332 Data Units Read: 945 00:08:21.332 Data Units Written: 875 00:08:21.332 Host Read Commands: 39684 00:08:21.332 Host Write Commands: 39107 00:08:21.332 Controller Busy Time: 0 minutes 00:08:21.332 Power Cycles: 0 00:08:21.332 Power On Hours: 0 hours 00:08:21.332 Unsafe Shutdowns: 0 00:08:21.332 Unrecoverable Media Errors: 0 00:08:21.332 Lifetime Error Log Entries: 0 00:08:21.332 Warning Temperature Time: 0 minutes 00:08:21.332 Critical Temperature Time: 0 minutes 00:08:21.332 00:08:21.332 Number of Queues 00:08:21.332 ================ 00:08:21.332 Number of I/O Submission Queues: 64 00:08:21.333 Number of I/O Completion Queues: 64 00:08:21.333 00:08:21.333 ZNS Specific Controller Data 00:08:21.333 ============================ 00:08:21.333 Zone Append Size Limit: 0 00:08:21.333 00:08:21.333 00:08:21.333 Active Namespaces 00:08:21.333 ================= 00:08:21.333 Namespace ID:1 00:08:21.333 Error Recovery Timeout: Unlimited 00:08:21.333 Command Set Identifier: NVM (00h) 00:08:21.333 Deallocate: Supported 00:08:21.333 Deallocated/Unwritten Error: Supported 00:08:21.333 Deallocated Read Value: All 0x00 00:08:21.333 Deallocate in Write Zeroes: Not Supported 00:08:21.333 Deallocated Guard Field: 0xFFFF 00:08:21.333 Flush: Supported 00:08:21.333 Reservation: Not Supported 00:08:21.333 Namespace Sharing Capabilities: Multiple Controllers 00:08:21.333 Size (in LBAs): 262144 (1GiB) 00:08:21.333 Capacity (in LBAs): 262144 (1GiB) 00:08:21.333 Utilization (in LBAs): 262144 (1GiB) 00:08:21.333 Thin Provisioning: Not Supported 00:08:21.333 Per-NS Atomic Units: No 00:08:21.333 Maximum Single Source Range Length: 128 00:08:21.333 Maximum Copy Length: 128 00:08:21.333 Maximum Source Range Count: 128 00:08:21.333 NGUID/EUI64 Never Reused: No 00:08:21.333 Namespace Write Protected: No 00:08:21.333 Endurance group ID: 1 00:08:21.333 Number of LBA Formats: 8 00:08:21.333 Current LBA Format: LBA Format #04 00:08:21.333 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.333 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.333 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.333 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.333 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.333 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.333 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:21.333 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.333 00:08:21.333 Get Feature FDP: 00:08:21.333 ================ 00:08:21.333 Enabled: Yes 00:08:21.333 FDP configuration index: 0 00:08:21.333 00:08:21.333 FDP configurations log page 00:08:21.333 =========================== 00:08:21.333 Number of FDP configurations: 1 00:08:21.333 Version: 0 00:08:21.333 Size: 112 00:08:21.333 FDP Configuration Descriptor: 0 00:08:21.333 Descriptor Size: 96 00:08:21.333 Reclaim Group Identifier format: 2 00:08:21.333 FDP Volatile Write Cache: Not Present 00:08:21.333 FDP Configuration: Valid 00:08:21.333 Vendor Specific Size: 0 00:08:21.333 Number of Reclaim Groups: 2 00:08:21.333 Number of Reclaim Unit Handles: 8 00:08:21.333 Max Placement Identifiers: 128 00:08:21.333 Number of Namespaces Supported: 256 00:08:21.333 Reclaim Unit Nominal Size: 6000000 bytes 00:08:21.333 Estimated Reclaim Unit Time Limit: Not Reported 00:08:21.333 RUH Desc #000: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #001: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #002: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #003: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #004: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #005: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #006: RUH Type: Initially Isolated 00:08:21.333 RUH Desc #007: RUH Type: Initially Isolated 00:08:21.333 00:08:21.333 FDP reclaim unit handle usage log page 00:08:21.333 ====================================== 00:08:21.333 Number of Reclaim Unit Handles: 8 00:08:21.333 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:21.333 RUH Usage Desc #001: RUH Attributes: Unused 00:08:21.333 RUH Usage Desc #002: RUH Attributes: Unused 00:08:21.333 RUH Usage Desc #003: RUH Attributes: Unused 00:08:21.333 RUH Usage Desc #004: RUH Attributes: Unused 00:08:21.333 RUH Usage Desc #005: RUH Attributes: Unused 00:08:21.333 RUH Usage Desc #006: RUH Attributes: Unused 00:08:21.333 RUH Usage Desc #007: RUH Attributes: Unused 00:08:21.333 00:08:21.333 FDP statistics log page 00:08:21.333 ======================= 00:08:21.333 Host bytes with metadata written: 556048384 00:08:21.333 Media bytes with metadata written: 556126208 00:08:21.333 Media bytes erased: 0 00:08:21.333 00:08:21.333 FDP events log page 00:08:21.333 =================== 00:08:21.333 Number of FDP events: 0 00:08:21.333 00:08:21.333 NVM Specific Namespace Data 00:08:21.333 =========================== 00:08:21.333 Logical Block Storage Tag Mask: 0 00:08:21.333 Protection Information Capabilities: 00:08:21.333 16b Guard Protection Information Storage Tag Support: No 00:08:21.333 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.333 Storage Tag Check Read Support: No 00:08:21.333 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.333 00:08:21.333 real 0m1.036s 00:08:21.333 user 0m0.390s 00:08:21.333 sys 0m0.428s 00:08:21.333 10:14:00 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.333 10:14:00 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:21.333 ************************************ 00:08:21.333 END TEST nvme_identify 00:08:21.333 ************************************ 00:08:21.333 10:14:00 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:21.333 10:14:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.333 10:14:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.333 10:14:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.333 ************************************ 00:08:21.333 START TEST nvme_perf 00:08:21.333 ************************************ 00:08:21.333 10:14:00 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:21.333 10:14:00 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:22.720 Initializing NVMe Controllers 00:08:22.720 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.720 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.720 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.720 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.720 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:22.720 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:22.720 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:22.720 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:22.720 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:22.720 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:22.720 Initialization complete. Launching workers. 
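For reference, the two binaries exercised here can be invoked directly against the same emulated controllers outside CI. A minimal sketch, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk (the path this harness uses) and PCIe devices already bound for userspace access via scripts/setup.sh; flags beyond -q/-w/-o/-t are copied verbatim from the command lines in this log rather than asserted:

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk   # assumed checkout location, as in this log

  # Dump controller and namespace data for one emulated controller,
  # as nvme_identify does above:
  "$SPDK/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Run the same workload as nvme_perf: queue depth 128 (-q), sequential
  # reads (-w read), 12288-byte I/Os (-o), for 1 second (-t); -LL, -i 0,
  # and -N are taken unchanged from the invocation recorded above.
  "$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a sanity check on the table that follows: 66308.48 IOPS at 12288 bytes per I/O is 66308.48 x 12288 / 1048576 = 777.05 MiB/s, matching the Total row.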
00:08:22.720 ======================================================== 00:08:22.720 Latency(us) 00:08:22.720 Device Information : IOPS MiB/s Average min max 00:08:22.720 PCIE (0000:00:13.0) NSID 1 from core 0: 11040.78 129.38 11597.34 4493.13 32699.33 00:08:22.720 PCIE (0000:00:10.0) NSID 1 from core 0: 11040.78 129.38 11587.51 4123.70 32529.30 00:08:22.720 PCIE (0000:00:11.0) NSID 1 from core 0: 11040.78 129.38 11578.36 3934.66 32150.05 00:08:22.720 PCIE (0000:00:12.0) NSID 1 from core 0: 11040.78 129.38 11568.13 3439.10 33057.77 00:08:22.720 PCIE (0000:00:12.0) NSID 2 from core 0: 11040.78 129.38 11558.03 3210.95 32676.53 00:08:22.720 PCIE (0000:00:12.0) NSID 3 from core 0: 11104.60 130.13 11481.69 2968.26 25995.68 00:08:22.720 ======================================================== 00:08:22.720 Total : 66308.48 777.05 11561.76 2968.26 33057.77 00:08:22.720 00:08:22.720 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:22.720 ================================================================================= 00:08:22.720 1.00000% : 5772.209us 00:08:22.720 10.00000% : 6276.332us 00:08:22.720 25.00000% : 7007.311us 00:08:22.720 50.00000% : 11645.243us 00:08:22.720 75.00000% : 14518.745us 00:08:22.720 90.00000% : 16636.062us 00:08:22.720 95.00000% : 17845.957us 00:08:22.720 98.00000% : 19660.800us 00:08:22.720 99.00000% : 22685.538us 00:08:22.720 99.50000% : 31457.280us 00:08:22.720 99.90000% : 32667.175us 00:08:22.720 99.99000% : 32667.175us 00:08:22.720 99.99900% : 32868.825us 00:08:22.720 99.99990% : 32868.825us 00:08:22.720 99.99999% : 32868.825us 00:08:22.720 00:08:22.720 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:22.720 ================================================================================= 00:08:22.720 1.00000% : 5671.385us 00:08:22.720 10.00000% : 6276.332us 00:08:22.720 25.00000% : 7007.311us 00:08:22.720 50.00000% : 11645.243us 00:08:22.720 75.00000% : 14518.745us 00:08:22.720 90.00000% : 16636.062us 00:08:22.720 95.00000% : 17745.132us 00:08:22.720 98.00000% : 19660.800us 00:08:22.720 99.00000% : 22786.363us 00:08:22.720 99.50000% : 31457.280us 00:08:22.720 99.90000% : 32263.877us 00:08:22.720 99.99000% : 32667.175us 00:08:22.720 99.99900% : 32667.175us 00:08:22.720 99.99990% : 32667.175us 00:08:22.720 99.99999% : 32667.175us 00:08:22.720 00:08:22.720 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:22.720 ================================================================================= 00:08:22.720 1.00000% : 5772.209us 00:08:22.720 10.00000% : 6276.332us 00:08:22.720 25.00000% : 7007.311us 00:08:22.720 50.00000% : 11393.182us 00:08:22.720 75.00000% : 14518.745us 00:08:22.720 90.00000% : 16636.062us 00:08:22.720 95.00000% : 17644.308us 00:08:22.720 98.00000% : 19761.625us 00:08:22.720 99.00000% : 22887.188us 00:08:22.720 99.50000% : 31255.631us 00:08:22.720 99.90000% : 32062.228us 00:08:22.720 99.99000% : 32263.877us 00:08:22.720 99.99900% : 32263.877us 00:08:22.720 99.99990% : 32263.877us 00:08:22.720 99.99999% : 32263.877us 00:08:22.720 00:08:22.720 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:22.720 ================================================================================= 00:08:22.720 1.00000% : 5595.766us 00:08:22.720 10.00000% : 6276.332us 00:08:22.720 25.00000% : 7007.311us 00:08:22.721 50.00000% : 11443.594us 00:08:22.721 75.00000% : 14619.569us 00:08:22.721 90.00000% : 16535.237us 00:08:22.721 95.00000% : 17644.308us 00:08:22.721 98.00000% : 20064.098us 
00:08:22.721 99.00000% : 23895.434us 00:08:22.721 99.50000% : 32062.228us 00:08:22.721 99.90000% : 32868.825us 00:08:22.721 99.99000% : 33070.474us 00:08:22.721 99.99900% : 33070.474us 00:08:22.721 99.99990% : 33070.474us 00:08:22.721 99.99999% : 33070.474us 00:08:22.721 00:08:22.721 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:22.721 ================================================================================= 00:08:22.721 1.00000% : 5394.117us 00:08:22.721 10.00000% : 6276.332us 00:08:22.721 25.00000% : 6956.898us 00:08:22.721 50.00000% : 11594.831us 00:08:22.721 75.00000% : 14720.394us 00:08:22.721 90.00000% : 16535.237us 00:08:22.721 95.00000% : 17946.782us 00:08:22.721 98.00000% : 19862.449us 00:08:22.721 99.00000% : 24097.083us 00:08:22.721 99.50000% : 31658.929us 00:08:22.721 99.90000% : 32667.175us 00:08:22.721 99.99000% : 32667.175us 00:08:22.721 99.99900% : 32868.825us 00:08:22.721 99.99990% : 32868.825us 00:08:22.721 99.99999% : 32868.825us 00:08:22.721 00:08:22.721 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:22.721 ================================================================================= 00:08:22.721 1.00000% : 5167.262us 00:08:22.721 10.00000% : 6251.126us 00:08:22.721 25.00000% : 6956.898us 00:08:22.721 50.00000% : 11594.831us 00:08:22.721 75.00000% : 14619.569us 00:08:22.721 90.00000% : 16434.412us 00:08:22.721 95.00000% : 17543.483us 00:08:22.721 98.00000% : 19358.326us 00:08:22.721 99.00000% : 20164.923us 00:08:22.721 99.50000% : 24601.206us 00:08:22.721 99.90000% : 25710.277us 00:08:22.721 99.99000% : 26012.751us 00:08:22.721 99.99900% : 26012.751us 00:08:22.721 99.99990% : 26012.751us 00:08:22.721 99.99999% : 26012.751us 00:08:22.721 00:08:22.721 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:22.721 ============================================================================== 00:08:22.721 Range in us Cumulative IO count 00:08:22.721 4486.695 - 4511.902: 0.0271% ( 3) 00:08:22.721 4511.902 - 4537.108: 0.1084% ( 9) 00:08:22.721 4537.108 - 4562.314: 0.1355% ( 3) 00:08:22.721 4562.314 - 4587.520: 0.1535% ( 2) 00:08:22.721 4587.520 - 4612.726: 0.1716% ( 2) 00:08:22.721 4612.726 - 4637.932: 0.1897% ( 2) 00:08:22.721 4637.932 - 4663.138: 0.1987% ( 1) 00:08:22.721 4663.138 - 4688.345: 0.2168% ( 2) 00:08:22.721 4688.345 - 4713.551: 0.2439% ( 3) 00:08:22.721 4713.551 - 4738.757: 0.2619% ( 2) 00:08:22.721 4738.757 - 4763.963: 0.2890% ( 3) 00:08:22.721 4763.963 - 4789.169: 0.3071% ( 2) 00:08:22.721 4789.169 - 4814.375: 0.3342% ( 3) 00:08:22.721 4814.375 - 4839.582: 0.3522% ( 2) 00:08:22.721 4839.582 - 4864.788: 0.3793% ( 3) 00:08:22.721 4864.788 - 4889.994: 0.3974% ( 2) 00:08:22.721 4889.994 - 4915.200: 0.4245% ( 3) 00:08:22.721 4915.200 - 4940.406: 0.4426% ( 2) 00:08:22.721 4940.406 - 4965.612: 0.4697% ( 3) 00:08:22.721 4965.612 - 4990.818: 0.4967% ( 3) 00:08:22.721 4990.818 - 5016.025: 0.5148% ( 2) 00:08:22.721 5016.025 - 5041.231: 0.5329% ( 2) 00:08:22.721 5041.231 - 5066.437: 0.5600% ( 3) 00:08:22.721 5066.437 - 5091.643: 0.5780% ( 2) 00:08:22.721 5671.385 - 5696.591: 0.5961% ( 2) 00:08:22.721 5696.591 - 5721.797: 0.6503% ( 6) 00:08:22.721 5721.797 - 5747.003: 0.8129% ( 18) 00:08:22.721 5747.003 - 5772.209: 1.0477% ( 26) 00:08:22.721 5772.209 - 5797.415: 1.3999% ( 39) 00:08:22.721 5797.415 - 5822.622: 1.6799% ( 31) 00:08:22.721 5822.622 - 5847.828: 1.8876% ( 23) 00:08:22.721 5847.828 - 5873.034: 2.1496% ( 29) 00:08:22.721 5873.034 - 5898.240: 2.5560% ( 45) 00:08:22.721 5898.240 - 5923.446: 
3.0166% ( 51) 00:08:22.721 5923.446 - 5948.652: 3.5043% ( 54) 00:08:22.721 5948.652 - 5973.858: 4.0462% ( 60) 00:08:22.721 5973.858 - 5999.065: 4.5430% ( 55) 00:08:22.721 5999.065 - 6024.271: 5.1030% ( 62) 00:08:22.721 6024.271 - 6049.477: 5.6087% ( 56) 00:08:22.721 6049.477 - 6074.683: 6.0513% ( 49) 00:08:22.721 6074.683 - 6099.889: 6.4668% ( 46) 00:08:22.721 6099.889 - 6125.095: 6.9003% ( 48) 00:08:22.721 6125.095 - 6150.302: 7.3609% ( 51) 00:08:22.721 6150.302 - 6175.508: 7.8306% ( 52) 00:08:22.721 6175.508 - 6200.714: 8.3454% ( 57) 00:08:22.721 6200.714 - 6225.920: 8.8602% ( 57) 00:08:22.721 6225.920 - 6251.126: 9.4111% ( 61) 00:08:22.721 6251.126 - 6276.332: 10.1066% ( 77) 00:08:22.721 6276.332 - 6301.538: 10.9194% ( 90) 00:08:22.721 6301.538 - 6326.745: 11.6691% ( 83) 00:08:22.721 6326.745 - 6351.951: 12.3555% ( 76) 00:08:22.721 6351.951 - 6377.157: 13.1593% ( 89) 00:08:22.721 6377.157 - 6402.363: 13.9270% ( 85) 00:08:22.721 6402.363 - 6427.569: 14.6767% ( 83) 00:08:22.721 6427.569 - 6452.775: 15.4082% ( 81) 00:08:22.721 6452.775 - 6503.188: 16.7811% ( 152) 00:08:22.721 6503.188 - 6553.600: 18.1087% ( 147) 00:08:22.721 6553.600 - 6604.012: 19.4184% ( 145) 00:08:22.721 6604.012 - 6654.425: 20.4118% ( 110) 00:08:22.721 6654.425 - 6704.837: 21.1976% ( 87) 00:08:22.721 6704.837 - 6755.249: 21.9924% ( 88) 00:08:22.721 6755.249 - 6805.662: 22.6698% ( 75) 00:08:22.721 6805.662 - 6856.074: 23.3111% ( 71) 00:08:22.721 6856.074 - 6906.486: 24.0065% ( 77) 00:08:22.721 6906.486 - 6956.898: 24.7561% ( 83) 00:08:22.721 6956.898 - 7007.311: 25.4426% ( 76) 00:08:22.721 7007.311 - 7057.723: 26.1109% ( 74) 00:08:22.721 7057.723 - 7108.135: 26.7973% ( 76) 00:08:22.721 7108.135 - 7158.548: 27.3392% ( 60) 00:08:22.721 7158.548 - 7208.960: 27.6824% ( 38) 00:08:22.721 7208.960 - 7259.372: 27.9263% ( 27) 00:08:22.721 7259.372 - 7309.785: 28.1160% ( 21) 00:08:22.721 7309.785 - 7360.197: 28.2514% ( 15) 00:08:22.721 7360.197 - 7410.609: 28.3779% ( 14) 00:08:22.721 7410.609 - 7461.022: 28.5134% ( 15) 00:08:22.721 7461.022 - 7511.434: 28.6488% ( 15) 00:08:22.721 7511.434 - 7561.846: 28.7572% ( 12) 00:08:22.721 7561.846 - 7612.258: 28.8656% ( 12) 00:08:22.721 7612.258 - 7662.671: 28.9379% ( 8) 00:08:22.721 7662.671 - 7713.083: 29.0372% ( 11) 00:08:22.721 7713.083 - 7763.495: 29.1546% ( 13) 00:08:22.721 7763.495 - 7813.908: 29.3533% ( 22) 00:08:22.721 7813.908 - 7864.320: 29.4707% ( 13) 00:08:22.721 7864.320 - 7914.732: 29.5701% ( 11) 00:08:22.721 7914.732 - 7965.145: 29.6694% ( 11) 00:08:22.721 7965.145 - 8015.557: 29.7959% ( 14) 00:08:22.721 8015.557 - 8065.969: 29.9133% ( 13) 00:08:22.721 8065.969 - 8116.382: 30.0217% ( 12) 00:08:22.721 8116.382 - 8166.794: 30.1210% ( 11) 00:08:22.721 8166.794 - 8217.206: 30.2294% ( 12) 00:08:22.721 8217.206 - 8267.618: 30.3378% ( 12) 00:08:22.721 8267.618 - 8318.031: 30.4642% ( 14) 00:08:22.721 8318.031 - 8368.443: 30.5455% ( 9) 00:08:22.721 8368.443 - 8418.855: 30.6268% ( 9) 00:08:22.721 8418.855 - 8469.268: 30.7533% ( 14) 00:08:22.721 8469.268 - 8519.680: 30.8707% ( 13) 00:08:22.721 8519.680 - 8570.092: 31.0242% ( 17) 00:08:22.721 8570.092 - 8620.505: 31.0965% ( 8) 00:08:22.721 8620.505 - 8670.917: 31.1777% ( 9) 00:08:22.721 8670.917 - 8721.329: 31.2500% ( 8) 00:08:22.721 8721.329 - 8771.742: 31.3223% ( 8) 00:08:22.721 8771.742 - 8822.154: 31.4397% ( 13) 00:08:22.721 8822.154 - 8872.566: 31.6022% ( 18) 00:08:22.721 8872.566 - 8922.978: 31.8100% ( 23) 00:08:22.721 8922.978 - 8973.391: 31.9635% ( 17) 00:08:22.721 8973.391 - 9023.803: 32.1351% ( 19) 00:08:22.721 9023.803 - 
9074.215: 32.3248% ( 21) 00:08:22.721 9074.215 - 9124.628: 32.5145% ( 21) 00:08:22.721 9124.628 - 9175.040: 32.7041% ( 21) 00:08:22.721 9175.040 - 9225.452: 32.8757% ( 19) 00:08:22.721 9225.452 - 9275.865: 33.0564% ( 20) 00:08:22.721 9275.865 - 9326.277: 33.2099% ( 17) 00:08:22.721 9326.277 - 9376.689: 33.3363% ( 14) 00:08:22.721 9376.689 - 9427.102: 33.4718% ( 15) 00:08:22.721 9427.102 - 9477.514: 33.5892% ( 13) 00:08:22.721 9477.514 - 9527.926: 33.6976% ( 12) 00:08:22.721 9527.926 - 9578.338: 33.7879% ( 10) 00:08:22.721 9578.338 - 9628.751: 33.8692% ( 9) 00:08:22.721 9628.751 - 9679.163: 33.9776% ( 12) 00:08:22.721 9679.163 - 9729.575: 34.0950% ( 13) 00:08:22.721 9729.575 - 9779.988: 34.1763% ( 9) 00:08:22.721 9779.988 - 9830.400: 34.2666% ( 10) 00:08:22.721 9830.400 - 9880.812: 34.3479% ( 9) 00:08:22.721 9880.812 - 9931.225: 34.4292% ( 9) 00:08:22.721 9931.225 - 9981.637: 34.4924% ( 7) 00:08:22.721 9981.637 - 10032.049: 34.6098% ( 13) 00:08:22.721 10032.049 - 10082.462: 34.7272% ( 13) 00:08:22.721 10082.462 - 10132.874: 34.8537% ( 14) 00:08:22.721 10132.874 - 10183.286: 35.0163% ( 18) 00:08:22.721 10183.286 - 10233.698: 35.2691% ( 28) 00:08:22.721 10233.698 - 10284.111: 35.6304% ( 40) 00:08:22.721 10284.111 - 10334.523: 36.0098% ( 42) 00:08:22.721 10334.523 - 10384.935: 36.4794% ( 52) 00:08:22.721 10384.935 - 10435.348: 37.0213% ( 60) 00:08:22.721 10435.348 - 10485.760: 37.5903% ( 63) 00:08:22.721 10485.760 - 10536.172: 38.1142% ( 58) 00:08:22.721 10536.172 - 10586.585: 38.6290% ( 57) 00:08:22.721 10586.585 - 10636.997: 39.1980% ( 63) 00:08:22.721 10636.997 - 10687.409: 39.7670% ( 63) 00:08:22.721 10687.409 - 10737.822: 40.3179% ( 61) 00:08:22.721 10737.822 - 10788.234: 40.9230% ( 67) 00:08:22.721 10788.234 - 10838.646: 41.5462% ( 69) 00:08:22.721 10838.646 - 10889.058: 42.1243% ( 64) 00:08:22.721 10889.058 - 10939.471: 42.7836% ( 73) 00:08:22.721 10939.471 - 10989.883: 43.4339% ( 72) 00:08:22.721 10989.883 - 11040.295: 44.1022% ( 74) 00:08:22.721 11040.295 - 11090.708: 44.8158% ( 79) 00:08:22.721 11090.708 - 11141.120: 45.5202% ( 78) 00:08:22.721 11141.120 - 11191.532: 46.1976% ( 75) 00:08:22.722 11191.532 - 11241.945: 46.8569% ( 73) 00:08:22.722 11241.945 - 11292.357: 47.5253% ( 74) 00:08:22.722 11292.357 - 11342.769: 48.0762% ( 61) 00:08:22.722 11342.769 - 11393.182: 48.4556% ( 42) 00:08:22.722 11393.182 - 11443.594: 48.8710% ( 46) 00:08:22.722 11443.594 - 11494.006: 49.2504% ( 42) 00:08:22.722 11494.006 - 11544.418: 49.5574% ( 34) 00:08:22.722 11544.418 - 11594.831: 49.8736% ( 35) 00:08:22.722 11594.831 - 11645.243: 50.1445% ( 30) 00:08:22.722 11645.243 - 11695.655: 50.3613% ( 24) 00:08:22.722 11695.655 - 11746.068: 50.6232% ( 29) 00:08:22.722 11746.068 - 11796.480: 50.9122% ( 32) 00:08:22.722 11796.480 - 11846.892: 51.1651% ( 28) 00:08:22.722 11846.892 - 11897.305: 51.3728% ( 23) 00:08:22.722 11897.305 - 11947.717: 51.5986% ( 25) 00:08:22.722 11947.717 - 11998.129: 51.8335% ( 26) 00:08:22.722 11998.129 - 12048.542: 52.0773% ( 27) 00:08:22.722 12048.542 - 12098.954: 52.3302% ( 28) 00:08:22.722 12098.954 - 12149.366: 52.6012% ( 30) 00:08:22.722 12149.366 - 12199.778: 52.8902% ( 32) 00:08:22.722 12199.778 - 12250.191: 53.1973% ( 34) 00:08:22.722 12250.191 - 12300.603: 53.4863% ( 32) 00:08:22.722 12300.603 - 12351.015: 53.8114% ( 36) 00:08:22.722 12351.015 - 12401.428: 54.1366% ( 36) 00:08:22.722 12401.428 - 12451.840: 54.4888% ( 39) 00:08:22.722 12451.840 - 12502.252: 54.8410% ( 39) 00:08:22.722 12502.252 - 12552.665: 55.1933% ( 39) 00:08:22.722 12552.665 - 12603.077: 55.5184% ( 36) 
00:08:22.722 12603.077 - 12653.489: 55.8797% ( 40) 00:08:22.722 12653.489 - 12703.902: 56.2229% ( 38) 00:08:22.722 12703.902 - 12754.314: 56.5119% ( 32) 00:08:22.722 12754.314 - 12804.726: 56.8280% ( 35) 00:08:22.722 12804.726 - 12855.138: 57.0900% ( 29) 00:08:22.722 12855.138 - 12905.551: 57.3067% ( 24) 00:08:22.722 12905.551 - 13006.375: 57.7764% ( 52) 00:08:22.722 13006.375 - 13107.200: 58.2912% ( 57) 00:08:22.722 13107.200 - 13208.025: 58.9595% ( 74) 00:08:22.722 13208.025 - 13308.849: 59.6279% ( 74) 00:08:22.722 13308.849 - 13409.674: 60.4046% ( 86) 00:08:22.722 13409.674 - 13510.498: 61.3891% ( 109) 00:08:22.722 13510.498 - 13611.323: 62.4187% ( 114) 00:08:22.722 13611.323 - 13712.148: 63.7735% ( 150) 00:08:22.722 13712.148 - 13812.972: 65.2999% ( 169) 00:08:22.722 13812.972 - 13913.797: 66.8533% ( 172) 00:08:22.722 13913.797 - 14014.622: 68.5242% ( 185) 00:08:22.722 14014.622 - 14115.446: 70.2041% ( 186) 00:08:22.722 14115.446 - 14216.271: 71.8389% ( 181) 00:08:22.722 14216.271 - 14317.095: 73.1575% ( 146) 00:08:22.722 14317.095 - 14417.920: 74.3858% ( 136) 00:08:22.722 14417.920 - 14518.745: 75.4787% ( 121) 00:08:22.722 14518.745 - 14619.569: 76.5264% ( 116) 00:08:22.722 14619.569 - 14720.394: 77.3663% ( 93) 00:08:22.722 14720.394 - 14821.218: 78.0889% ( 80) 00:08:22.722 14821.218 - 14922.043: 78.9288% ( 93) 00:08:22.722 14922.043 - 15022.868: 79.6152% ( 76) 00:08:22.722 15022.868 - 15123.692: 80.3378% ( 80) 00:08:22.722 15123.692 - 15224.517: 81.1326% ( 88) 00:08:22.722 15224.517 - 15325.342: 81.8913% ( 84) 00:08:22.722 15325.342 - 15426.166: 82.7132% ( 91) 00:08:22.722 15426.166 - 15526.991: 83.4357% ( 80) 00:08:22.722 15526.991 - 15627.815: 84.1944% ( 84) 00:08:22.722 15627.815 - 15728.640: 84.9350% ( 82) 00:08:22.722 15728.640 - 15829.465: 85.5853% ( 72) 00:08:22.722 15829.465 - 15930.289: 86.2897% ( 78) 00:08:22.722 15930.289 - 16031.114: 86.9581% ( 74) 00:08:22.722 16031.114 - 16131.938: 87.5000% ( 60) 00:08:22.722 16131.938 - 16232.763: 88.0058% ( 56) 00:08:22.722 16232.763 - 16333.588: 88.6290% ( 69) 00:08:22.722 16333.588 - 16434.412: 89.1799% ( 61) 00:08:22.722 16434.412 - 16535.237: 89.7760% ( 66) 00:08:22.722 16535.237 - 16636.062: 90.3540% ( 64) 00:08:22.722 16636.062 - 16736.886: 90.9140% ( 62) 00:08:22.722 16736.886 - 16837.711: 91.3656% ( 50) 00:08:22.722 16837.711 - 16938.535: 91.7901% ( 47) 00:08:22.722 16938.535 - 17039.360: 92.2507% ( 51) 00:08:22.722 17039.360 - 17140.185: 92.7655% ( 57) 00:08:22.722 17140.185 - 17241.009: 93.2081% ( 49) 00:08:22.722 17241.009 - 17341.834: 93.5694% ( 40) 00:08:22.722 17341.834 - 17442.658: 93.9216% ( 39) 00:08:22.722 17442.658 - 17543.483: 94.2287% ( 34) 00:08:22.722 17543.483 - 17644.308: 94.5990% ( 41) 00:08:22.722 17644.308 - 17745.132: 94.9422% ( 38) 00:08:22.722 17745.132 - 17845.957: 95.1951% ( 28) 00:08:22.722 17845.957 - 17946.782: 95.5022% ( 34) 00:08:22.722 17946.782 - 18047.606: 95.7009% ( 22) 00:08:22.722 18047.606 - 18148.431: 95.9086% ( 23) 00:08:22.722 18148.431 - 18249.255: 96.1073% ( 22) 00:08:22.722 18249.255 - 18350.080: 96.2518% ( 16) 00:08:22.722 18350.080 - 18450.905: 96.4144% ( 18) 00:08:22.722 18450.905 - 18551.729: 96.5589% ( 16) 00:08:22.722 18551.729 - 18652.554: 96.6944% ( 15) 00:08:22.722 18652.554 - 18753.378: 96.8660% ( 19) 00:08:22.722 18753.378 - 18854.203: 96.9924% ( 14) 00:08:22.722 18854.203 - 18955.028: 97.1369% ( 16) 00:08:22.722 18955.028 - 19055.852: 97.2814% ( 16) 00:08:22.722 19055.852 - 19156.677: 97.4079% ( 14) 00:08:22.722 19156.677 - 19257.502: 97.5163% ( 12) 00:08:22.722 19257.502 
- 19358.326: 97.6337% ( 13) 00:08:22.722 19358.326 - 19459.151: 97.7601% ( 14) 00:08:22.722 19459.151 - 19559.975: 97.8775% ( 13) 00:08:22.722 19559.975 - 19660.800: 98.0220% ( 16) 00:08:22.722 19660.800 - 19761.625: 98.1395% ( 13) 00:08:22.722 19761.625 - 19862.449: 98.2388% ( 11) 00:08:22.722 19862.449 - 19963.274: 98.3020% ( 7) 00:08:22.722 19963.274 - 20064.098: 98.3382% ( 4) 00:08:22.722 20064.098 - 20164.923: 98.4014% ( 7) 00:08:22.722 20164.923 - 20265.748: 98.4556% ( 6) 00:08:22.722 20265.748 - 20366.572: 98.5188% ( 7) 00:08:22.722 20366.572 - 20467.397: 98.5730% ( 6) 00:08:22.722 20467.397 - 20568.222: 98.6272% ( 6) 00:08:22.722 20568.222 - 20669.046: 98.6633% ( 4) 00:08:22.722 20669.046 - 20769.871: 98.7175% ( 6) 00:08:22.722 20769.871 - 20870.695: 98.7626% ( 5) 00:08:22.722 20870.695 - 20971.520: 98.7988% ( 4) 00:08:22.722 20971.520 - 21072.345: 98.8168% ( 2) 00:08:22.722 21072.345 - 21173.169: 98.8349% ( 2) 00:08:22.722 21173.169 - 21273.994: 98.8439% ( 1) 00:08:22.722 22282.240 - 22383.065: 98.8620% ( 2) 00:08:22.722 22383.065 - 22483.889: 98.9252% ( 7) 00:08:22.722 22483.889 - 22584.714: 98.9884% ( 7) 00:08:22.722 22584.714 - 22685.538: 99.0517% ( 7) 00:08:22.722 22685.538 - 22786.363: 99.1239% ( 8) 00:08:22.722 22786.363 - 22887.188: 99.1781% ( 6) 00:08:22.722 22887.188 - 22988.012: 99.2413% ( 7) 00:08:22.722 22988.012 - 23088.837: 99.3046% ( 7) 00:08:22.722 23088.837 - 23189.662: 99.3678% ( 7) 00:08:22.722 23189.662 - 23290.486: 99.4129% ( 5) 00:08:22.722 23290.486 - 23391.311: 99.4220% ( 1) 00:08:22.722 31053.982 - 31255.631: 99.4852% ( 7) 00:08:22.722 31255.631 - 31457.280: 99.5574% ( 8) 00:08:22.722 31457.280 - 31658.929: 99.6116% ( 6) 00:08:22.722 31658.929 - 31860.578: 99.6749% ( 7) 00:08:22.722 31860.578 - 32062.228: 99.7471% ( 8) 00:08:22.722 32062.228 - 32263.877: 99.8103% ( 7) 00:08:22.722 32263.877 - 32465.526: 99.8916% ( 9) 00:08:22.722 32465.526 - 32667.175: 99.9910% ( 11) 00:08:22.722 32667.175 - 32868.825: 100.0000% ( 1) 00:08:22.722 00:08:22.722 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:22.722 ============================================================================== 00:08:22.722 Range in us Cumulative IO count 00:08:22.722 4108.603 - 4133.809: 0.0090% ( 1) 00:08:22.722 4133.809 - 4159.015: 0.0181% ( 1) 00:08:22.722 4159.015 - 4184.222: 0.0542% ( 4) 00:08:22.722 4184.222 - 4209.428: 0.0993% ( 5) 00:08:22.722 4209.428 - 4234.634: 0.1084% ( 1) 00:08:22.722 4234.634 - 4259.840: 0.1174% ( 1) 00:08:22.722 4259.840 - 4285.046: 0.1355% ( 2) 00:08:22.722 4285.046 - 4310.252: 0.1535% ( 2) 00:08:22.722 4310.252 - 4335.458: 0.1806% ( 3) 00:08:22.722 4335.458 - 4360.665: 0.1897% ( 1) 00:08:22.722 4360.665 - 4385.871: 0.2168% ( 3) 00:08:22.722 4385.871 - 4411.077: 0.2258% ( 1) 00:08:22.722 4411.077 - 4436.283: 0.2529% ( 3) 00:08:22.722 4436.283 - 4461.489: 0.2710% ( 2) 00:08:22.722 4461.489 - 4486.695: 0.2890% ( 2) 00:08:22.722 4486.695 - 4511.902: 0.2980% ( 1) 00:08:22.722 4511.902 - 4537.108: 0.3342% ( 4) 00:08:22.722 4537.108 - 4562.314: 0.3432% ( 1) 00:08:22.722 4562.314 - 4587.520: 0.3703% ( 3) 00:08:22.722 4587.520 - 4612.726: 0.3884% ( 2) 00:08:22.722 4612.726 - 4637.932: 0.3974% ( 1) 00:08:22.722 4637.932 - 4663.138: 0.4245% ( 3) 00:08:22.722 4663.138 - 4688.345: 0.4516% ( 3) 00:08:22.722 4688.345 - 4713.551: 0.4606% ( 1) 00:08:22.722 4713.551 - 4738.757: 0.4877% ( 3) 00:08:22.722 4738.757 - 4763.963: 0.4967% ( 1) 00:08:22.722 4763.963 - 4789.169: 0.5148% ( 2) 00:08:22.722 4789.169 - 4814.375: 0.5329% ( 2) 00:08:22.722 4814.375 - 
4839.582: 0.5509% ( 2) 00:08:22.722 4839.582 - 4864.788: 0.5780% ( 3) 00:08:22.722 5570.560 - 5595.766: 0.6413% ( 7) 00:08:22.722 5595.766 - 5620.972: 0.7406% ( 11) 00:08:22.722 5620.972 - 5646.178: 0.8671% ( 14) 00:08:22.722 5646.178 - 5671.385: 1.0206% ( 17) 00:08:22.722 5671.385 - 5696.591: 1.2193% ( 22) 00:08:22.722 5696.591 - 5721.797: 1.4722% ( 28) 00:08:22.722 5721.797 - 5747.003: 1.7251% ( 28) 00:08:22.722 5747.003 - 5772.209: 1.9689% ( 27) 00:08:22.722 5772.209 - 5797.415: 2.2399% ( 30) 00:08:22.722 5797.415 - 5822.622: 2.5560% ( 35) 00:08:22.722 5822.622 - 5847.828: 2.7908% ( 26) 00:08:22.722 5847.828 - 5873.034: 3.1973% ( 45) 00:08:22.722 5873.034 - 5898.240: 3.4863% ( 32) 00:08:22.722 5898.240 - 5923.446: 3.8927% ( 45) 00:08:22.722 5923.446 - 5948.652: 4.3624% ( 52) 00:08:22.722 5948.652 - 5973.858: 4.7778% ( 46) 00:08:22.722 5973.858 - 5999.065: 5.1120% ( 37) 00:08:22.723 5999.065 - 6024.271: 5.4552% ( 38) 00:08:22.723 6024.271 - 6049.477: 5.8797% ( 47) 00:08:22.723 6049.477 - 6074.683: 6.2500% ( 41) 00:08:22.723 6074.683 - 6099.889: 6.5842% ( 37) 00:08:22.723 6099.889 - 6125.095: 7.1261% ( 60) 00:08:22.723 6125.095 - 6150.302: 7.5054% ( 42) 00:08:22.723 6150.302 - 6175.508: 8.0925% ( 65) 00:08:22.723 6175.508 - 6200.714: 8.5892% ( 55) 00:08:22.723 6200.714 - 6225.920: 9.3931% ( 89) 00:08:22.723 6225.920 - 6251.126: 9.9892% ( 66) 00:08:22.723 6251.126 - 6276.332: 10.6304% ( 71) 00:08:22.723 6276.332 - 6301.538: 11.2717% ( 71) 00:08:22.723 6301.538 - 6326.745: 11.9400% ( 74) 00:08:22.723 6326.745 - 6351.951: 12.5000% ( 62) 00:08:22.723 6351.951 - 6377.157: 13.1593% ( 73) 00:08:22.723 6377.157 - 6402.363: 13.7915% ( 70) 00:08:22.723 6402.363 - 6427.569: 14.3967% ( 67) 00:08:22.723 6427.569 - 6452.775: 15.0470% ( 72) 00:08:22.723 6452.775 - 6503.188: 16.2934% ( 138) 00:08:22.723 6503.188 - 6553.600: 17.5578% ( 140) 00:08:22.723 6553.600 - 6604.012: 18.8313% ( 141) 00:08:22.723 6604.012 - 6654.425: 20.0686% ( 137) 00:08:22.723 6654.425 - 6704.837: 20.9718% ( 100) 00:08:22.723 6704.837 - 6755.249: 21.8389% ( 96) 00:08:22.723 6755.249 - 6805.662: 22.5524% ( 79) 00:08:22.723 6805.662 - 6856.074: 23.1936% ( 71) 00:08:22.723 6856.074 - 6906.486: 23.8620% ( 74) 00:08:22.723 6906.486 - 6956.898: 24.4942% ( 70) 00:08:22.723 6956.898 - 7007.311: 25.0813% ( 65) 00:08:22.723 7007.311 - 7057.723: 25.6051% ( 58) 00:08:22.723 7057.723 - 7108.135: 26.2825% ( 75) 00:08:22.723 7108.135 - 7158.548: 26.9418% ( 73) 00:08:22.723 7158.548 - 7208.960: 27.4476% ( 56) 00:08:22.723 7208.960 - 7259.372: 27.8811% ( 48) 00:08:22.723 7259.372 - 7309.785: 28.1431% ( 29) 00:08:22.723 7309.785 - 7360.197: 28.3508% ( 23) 00:08:22.723 7360.197 - 7410.609: 28.5043% ( 17) 00:08:22.723 7410.609 - 7461.022: 28.6308% ( 14) 00:08:22.723 7461.022 - 7511.434: 28.7572% ( 14) 00:08:22.723 7511.434 - 7561.846: 28.8385% ( 9) 00:08:22.723 7561.846 - 7612.258: 28.9108% ( 8) 00:08:22.723 7612.258 - 7662.671: 28.9740% ( 7) 00:08:22.723 7662.671 - 7713.083: 29.1185% ( 16) 00:08:22.723 7713.083 - 7763.495: 29.2088% ( 10) 00:08:22.723 7763.495 - 7813.908: 29.2901% ( 9) 00:08:22.723 7813.908 - 7864.320: 29.4165% ( 14) 00:08:22.723 7864.320 - 7914.732: 29.5159% ( 11) 00:08:22.723 7914.732 - 7965.145: 29.5882% ( 8) 00:08:22.723 7965.145 - 8015.557: 29.6604% ( 8) 00:08:22.723 8015.557 - 8065.969: 29.7959% ( 15) 00:08:22.723 8065.969 - 8116.382: 29.9043% ( 12) 00:08:22.723 8116.382 - 8166.794: 30.0217% ( 13) 00:08:22.723 8166.794 - 8217.206: 30.1210% ( 11) 00:08:22.723 8217.206 - 8267.618: 30.2204% ( 11) 00:08:22.723 8267.618 - 8318.031: 
30.3107% ( 10) 00:08:22.723 8318.031 - 8368.443: 30.4010% ( 10) 00:08:22.723 8368.443 - 8418.855: 30.5094% ( 12) 00:08:22.723 8418.855 - 8469.268: 30.6087% ( 11) 00:08:22.723 8469.268 - 8519.680: 30.7262% ( 13) 00:08:22.723 8519.680 - 8570.092: 30.8526% ( 14) 00:08:22.723 8570.092 - 8620.505: 30.9610% ( 12) 00:08:22.723 8620.505 - 8670.917: 31.0874% ( 14) 00:08:22.723 8670.917 - 8721.329: 31.1687% ( 9) 00:08:22.723 8721.329 - 8771.742: 31.2861% ( 13) 00:08:22.723 8771.742 - 8822.154: 31.4126% ( 14) 00:08:22.723 8822.154 - 8872.566: 31.5300% ( 13) 00:08:22.723 8872.566 - 8922.978: 31.6564% ( 14) 00:08:22.723 8922.978 - 8973.391: 31.8009% ( 16) 00:08:22.723 8973.391 - 9023.803: 31.9454% ( 16) 00:08:22.723 9023.803 - 9074.215: 32.0990% ( 17) 00:08:22.723 9074.215 - 9124.628: 32.2616% ( 18) 00:08:22.723 9124.628 - 9175.040: 32.3699% ( 12) 00:08:22.723 9175.040 - 9225.452: 32.5325% ( 18) 00:08:22.723 9225.452 - 9275.865: 32.6228% ( 10) 00:08:22.723 9275.865 - 9326.277: 32.7222% ( 11) 00:08:22.723 9326.277 - 9376.689: 32.8396% ( 13) 00:08:22.723 9376.689 - 9427.102: 32.9209% ( 9) 00:08:22.723 9427.102 - 9477.514: 33.0293% ( 12) 00:08:22.723 9477.514 - 9527.926: 33.1105% ( 9) 00:08:22.723 9527.926 - 9578.338: 33.1918% ( 9) 00:08:22.723 9578.338 - 9628.751: 33.2912% ( 11) 00:08:22.723 9628.751 - 9679.163: 33.3815% ( 10) 00:08:22.723 9679.163 - 9729.575: 33.4176% ( 4) 00:08:22.723 9729.575 - 9779.988: 33.6073% ( 21) 00:08:22.723 9779.988 - 9830.400: 33.6796% ( 8) 00:08:22.723 9830.400 - 9880.812: 33.7970% ( 13) 00:08:22.723 9880.812 - 9931.225: 33.8783% ( 9) 00:08:22.723 9931.225 - 9981.637: 34.0408% ( 18) 00:08:22.723 9981.637 - 10032.049: 34.2576% ( 24) 00:08:22.723 10032.049 - 10082.462: 34.4202% ( 18) 00:08:22.723 10082.462 - 10132.874: 34.6911% ( 30) 00:08:22.723 10132.874 - 10183.286: 34.9892% ( 33) 00:08:22.723 10183.286 - 10233.698: 35.3866% ( 44) 00:08:22.723 10233.698 - 10284.111: 35.8201% ( 48) 00:08:22.723 10284.111 - 10334.523: 36.2897% ( 52) 00:08:22.723 10334.523 - 10384.935: 36.7052% ( 46) 00:08:22.723 10384.935 - 10435.348: 37.1297% ( 47) 00:08:22.723 10435.348 - 10485.760: 37.6535% ( 58) 00:08:22.723 10485.760 - 10536.172: 38.2225% ( 63) 00:08:22.723 10536.172 - 10586.585: 38.6832% ( 51) 00:08:22.723 10586.585 - 10636.997: 39.2522% ( 63) 00:08:22.723 10636.997 - 10687.409: 39.7218% ( 52) 00:08:22.723 10687.409 - 10737.822: 40.2818% ( 62) 00:08:22.723 10737.822 - 10788.234: 40.8779% ( 66) 00:08:22.723 10788.234 - 10838.646: 41.7991% ( 102) 00:08:22.723 10838.646 - 10889.058: 42.3681% ( 63) 00:08:22.723 10889.058 - 10939.471: 42.9191% ( 61) 00:08:22.723 10939.471 - 10989.883: 43.5513% ( 70) 00:08:22.723 10989.883 - 11040.295: 44.1655% ( 68) 00:08:22.723 11040.295 - 11090.708: 44.8609% ( 77) 00:08:22.723 11090.708 - 11141.120: 45.3667% ( 56) 00:08:22.723 11141.120 - 11191.532: 45.9357% ( 63) 00:08:22.723 11191.532 - 11241.945: 46.5679% ( 70) 00:08:22.723 11241.945 - 11292.357: 47.2182% ( 72) 00:08:22.723 11292.357 - 11342.769: 47.9046% ( 76) 00:08:22.723 11342.769 - 11393.182: 48.4014% ( 55) 00:08:22.723 11393.182 - 11443.594: 48.9433% ( 60) 00:08:22.723 11443.594 - 11494.006: 49.2504% ( 34) 00:08:22.723 11494.006 - 11544.418: 49.6839% ( 48) 00:08:22.723 11544.418 - 11594.831: 49.9187% ( 26) 00:08:22.723 11594.831 - 11645.243: 50.2619% ( 38) 00:08:22.723 11645.243 - 11695.655: 50.5148% ( 28) 00:08:22.723 11695.655 - 11746.068: 50.9032% ( 43) 00:08:22.723 11746.068 - 11796.480: 51.1922% ( 32) 00:08:22.723 11796.480 - 11846.892: 51.6799% ( 54) 00:08:22.723 11846.892 - 11897.305: 
52.0412% ( 40) 00:08:22.723 11897.305 - 11947.717: 52.3573% ( 35) 00:08:22.723 11947.717 - 11998.129: 52.6824% ( 36) 00:08:22.723 11998.129 - 12048.542: 52.9805% ( 33) 00:08:22.723 12048.542 - 12098.954: 53.3327% ( 39) 00:08:22.723 12098.954 - 12149.366: 53.6759% ( 38) 00:08:22.723 12149.366 - 12199.778: 53.9379% ( 29) 00:08:22.723 12199.778 - 12250.191: 54.3172% ( 42) 00:08:22.723 12250.191 - 12300.603: 54.7056% ( 43) 00:08:22.723 12300.603 - 12351.015: 54.9404% ( 26) 00:08:22.723 12351.015 - 12401.428: 55.3649% ( 47) 00:08:22.723 12401.428 - 12451.840: 55.6087% ( 27) 00:08:22.723 12451.840 - 12502.252: 56.0694% ( 51) 00:08:22.723 12502.252 - 12552.665: 56.3584% ( 32) 00:08:22.723 12552.665 - 12603.077: 56.6926% ( 37) 00:08:22.723 12603.077 - 12653.489: 57.0087% ( 35) 00:08:22.723 12653.489 - 12703.902: 57.3248% ( 35) 00:08:22.723 12703.902 - 12754.314: 57.5596% ( 26) 00:08:22.723 12754.314 - 12804.726: 57.7944% ( 26) 00:08:22.723 12804.726 - 12855.138: 58.2009% ( 45) 00:08:22.723 12855.138 - 12905.551: 58.4718% ( 30) 00:08:22.723 12905.551 - 13006.375: 59.1673% ( 77) 00:08:22.723 13006.375 - 13107.200: 59.8356% ( 74) 00:08:22.723 13107.200 - 13208.025: 60.6214% ( 87) 00:08:22.723 13208.025 - 13308.849: 61.4794% ( 95) 00:08:22.723 13308.849 - 13409.674: 62.4729% ( 110) 00:08:22.723 13409.674 - 13510.498: 63.3671% ( 99) 00:08:22.723 13510.498 - 13611.323: 64.2702% ( 100) 00:08:22.723 13611.323 - 13712.148: 65.2908% ( 113) 00:08:22.723 13712.148 - 13812.972: 66.6546% ( 151) 00:08:22.723 13812.972 - 13913.797: 68.0726% ( 157) 00:08:22.723 13913.797 - 14014.622: 69.3100% ( 137) 00:08:22.723 14014.622 - 14115.446: 70.5654% ( 139) 00:08:22.723 14115.446 - 14216.271: 71.8840% ( 146) 00:08:22.723 14216.271 - 14317.095: 73.1395% ( 139) 00:08:22.723 14317.095 - 14417.920: 74.2233% ( 120) 00:08:22.723 14417.920 - 14518.745: 75.0813% ( 95) 00:08:22.723 14518.745 - 14619.569: 76.1651% ( 120) 00:08:22.723 14619.569 - 14720.394: 76.9418% ( 86) 00:08:22.723 14720.394 - 14821.218: 77.8992% ( 106) 00:08:22.723 14821.218 - 14922.043: 78.5585% ( 73) 00:08:22.723 14922.043 - 15022.868: 79.2720% ( 79) 00:08:22.723 15022.868 - 15123.692: 79.8230% ( 61) 00:08:22.723 15123.692 - 15224.517: 80.3739% ( 61) 00:08:22.723 15224.517 - 15325.342: 80.9700% ( 66) 00:08:22.723 15325.342 - 15426.166: 81.5300% ( 62) 00:08:22.723 15426.166 - 15526.991: 82.1803% ( 72) 00:08:22.723 15526.991 - 15627.815: 82.7583% ( 64) 00:08:22.723 15627.815 - 15728.640: 83.2912% ( 59) 00:08:22.723 15728.640 - 15829.465: 83.9686% ( 75) 00:08:22.723 15829.465 - 15930.289: 84.7092% ( 82) 00:08:22.723 15930.289 - 16031.114: 85.6304% ( 102) 00:08:22.723 16031.114 - 16131.938: 86.3439% ( 79) 00:08:22.723 16131.938 - 16232.763: 87.1387% ( 88) 00:08:22.723 16232.763 - 16333.588: 87.8793% ( 82) 00:08:22.723 16333.588 - 16434.412: 88.8638% ( 109) 00:08:22.723 16434.412 - 16535.237: 89.6947% ( 92) 00:08:22.723 16535.237 - 16636.062: 90.4624% ( 85) 00:08:22.723 16636.062 - 16736.886: 91.1127% ( 72) 00:08:22.723 16736.886 - 16837.711: 91.7178% ( 67) 00:08:22.723 16837.711 - 16938.535: 92.2959% ( 64) 00:08:22.723 16938.535 - 17039.360: 92.6481% ( 39) 00:08:22.723 17039.360 - 17140.185: 93.0365% ( 43) 00:08:22.723 17140.185 - 17241.009: 93.3978% ( 40) 00:08:22.723 17241.009 - 17341.834: 93.7952% ( 44) 00:08:22.723 17341.834 - 17442.658: 94.1926% ( 44) 00:08:22.723 17442.658 - 17543.483: 94.5087% ( 35) 00:08:22.724 17543.483 - 17644.308: 94.7616% ( 28) 00:08:22.724 17644.308 - 17745.132: 95.0235% ( 29) 00:08:22.724 17745.132 - 17845.957: 95.2944% ( 30) 
00:08:22.724 17845.957 - 17946.782: 95.4931% ( 22) 00:08:22.724 17946.782 - 18047.606: 95.6647% ( 19) 00:08:22.724 18047.606 - 18148.431: 95.8544% ( 21) 00:08:22.724 18148.431 - 18249.255: 96.0441% ( 21) 00:08:22.724 18249.255 - 18350.080: 96.1976% ( 17) 00:08:22.724 18350.080 - 18450.905: 96.3602% ( 18) 00:08:22.724 18450.905 - 18551.729: 96.4776% ( 13) 00:08:22.724 18551.729 - 18652.554: 96.5770% ( 11) 00:08:22.724 18652.554 - 18753.378: 96.7034% ( 14) 00:08:22.724 18753.378 - 18854.203: 96.8479% ( 16) 00:08:22.724 18854.203 - 18955.028: 96.9743% ( 14) 00:08:22.724 18955.028 - 19055.852: 97.1369% ( 18) 00:08:22.724 19055.852 - 19156.677: 97.2905% ( 17) 00:08:22.724 19156.677 - 19257.502: 97.3988% ( 12) 00:08:22.724 19257.502 - 19358.326: 97.5343% ( 15) 00:08:22.724 19358.326 - 19459.151: 97.7691% ( 26) 00:08:22.724 19459.151 - 19559.975: 97.8956% ( 14) 00:08:22.724 19559.975 - 19660.800: 98.0040% ( 12) 00:08:22.724 19660.800 - 19761.625: 98.0672% ( 7) 00:08:22.724 19761.625 - 19862.449: 98.1756% ( 12) 00:08:22.724 19862.449 - 19963.274: 98.2840% ( 12) 00:08:22.724 19963.274 - 20064.098: 98.3652% ( 9) 00:08:22.724 20064.098 - 20164.923: 98.4104% ( 5) 00:08:22.724 20164.923 - 20265.748: 98.4375% ( 3) 00:08:22.724 20265.748 - 20366.572: 98.5007% ( 7) 00:08:22.724 20366.572 - 20467.397: 98.5278% ( 3) 00:08:22.724 20467.397 - 20568.222: 98.5820% ( 6) 00:08:22.724 20568.222 - 20669.046: 98.6091% ( 3) 00:08:22.724 20669.046 - 20769.871: 98.6362% ( 3) 00:08:22.724 20769.871 - 20870.695: 98.6994% ( 7) 00:08:22.724 20870.695 - 20971.520: 98.7265% ( 3) 00:08:22.724 20971.520 - 21072.345: 98.7717% ( 5) 00:08:22.724 21072.345 - 21173.169: 98.7897% ( 2) 00:08:22.724 21173.169 - 21273.994: 98.8439% ( 6) 00:08:22.724 22383.065 - 22483.889: 98.8620% ( 2) 00:08:22.724 22483.889 - 22584.714: 98.9162% ( 6) 00:08:22.724 22584.714 - 22685.538: 98.9613% ( 5) 00:08:22.724 22685.538 - 22786.363: 99.0155% ( 6) 00:08:22.724 22786.363 - 22887.188: 99.0607% ( 5) 00:08:22.724 22887.188 - 22988.012: 99.1239% ( 7) 00:08:22.724 22988.012 - 23088.837: 99.2052% ( 9) 00:08:22.724 23088.837 - 23189.662: 99.2323% ( 3) 00:08:22.724 23189.662 - 23290.486: 99.2775% ( 5) 00:08:22.724 23290.486 - 23391.311: 99.3497% ( 8) 00:08:22.724 23391.311 - 23492.135: 99.3858% ( 4) 00:08:22.724 23492.135 - 23592.960: 99.4220% ( 4) 00:08:22.724 31053.982 - 31255.631: 99.4942% ( 8) 00:08:22.724 31255.631 - 31457.280: 99.5755% ( 9) 00:08:22.724 31457.280 - 31658.929: 99.6116% ( 4) 00:08:22.724 31658.929 - 31860.578: 99.7832% ( 19) 00:08:22.724 31860.578 - 32062.228: 99.8284% ( 5) 00:08:22.724 32062.228 - 32263.877: 99.9007% ( 8) 00:08:22.724 32263.877 - 32465.526: 99.9639% ( 7) 00:08:22.724 32465.526 - 32667.175: 100.0000% ( 4) 00:08:22.724 00:08:22.724 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:22.724 ============================================================================== 00:08:22.724 Range in us Cumulative IO count 00:08:22.724 3932.160 - 3957.366: 0.0181% ( 2) 00:08:22.724 3957.366 - 3982.572: 0.0361% ( 2) 00:08:22.724 3982.572 - 4007.778: 0.0542% ( 2) 00:08:22.724 4007.778 - 4032.985: 0.0813% ( 3) 00:08:22.724 4032.985 - 4058.191: 0.1084% ( 3) 00:08:22.724 4058.191 - 4083.397: 0.1355% ( 3) 00:08:22.724 4083.397 - 4108.603: 0.1626% ( 3) 00:08:22.724 4108.603 - 4133.809: 0.1897% ( 3) 00:08:22.724 4133.809 - 4159.015: 0.2077% ( 2) 00:08:22.724 4159.015 - 4184.222: 0.2439% ( 4) 00:08:22.724 4184.222 - 4209.428: 0.2619% ( 2) 00:08:22.724 4209.428 - 4234.634: 0.2800% ( 2) 00:08:22.724 4234.634 - 4259.840: 0.3071% ( 
3) 00:08:22.724 4259.840 - 4285.046: 0.3251% ( 2) 00:08:22.724 4285.046 - 4310.252: 0.3522% ( 3) 00:08:22.724 4310.252 - 4335.458: 0.3703% ( 2) 00:08:22.724 4335.458 - 4360.665: 0.3974% ( 3) 00:08:22.724 4360.665 - 4385.871: 0.4155% ( 2) 00:08:22.724 4385.871 - 4411.077: 0.4335% ( 2) 00:08:22.724 4411.077 - 4436.283: 0.4606% ( 3) 00:08:22.724 4436.283 - 4461.489: 0.4787% ( 2) 00:08:22.724 4461.489 - 4486.695: 0.5058% ( 3) 00:08:22.724 4486.695 - 4511.902: 0.5238% ( 2) 00:08:22.724 4511.902 - 4537.108: 0.5419% ( 2) 00:08:22.724 4537.108 - 4562.314: 0.5600% ( 2) 00:08:22.724 4562.314 - 4587.520: 0.5780% ( 2) 00:08:22.724 5595.766 - 5620.972: 0.6322% ( 6) 00:08:22.724 5620.972 - 5646.178: 0.6864% ( 6) 00:08:22.724 5646.178 - 5671.385: 0.7045% ( 2) 00:08:22.724 5671.385 - 5696.591: 0.7587% ( 6) 00:08:22.724 5696.591 - 5721.797: 0.8129% ( 6) 00:08:22.724 5721.797 - 5747.003: 0.9483% ( 15) 00:08:22.724 5747.003 - 5772.209: 1.1741% ( 25) 00:08:22.724 5772.209 - 5797.415: 1.6167% ( 49) 00:08:22.724 5797.415 - 5822.622: 1.8786% ( 29) 00:08:22.724 5822.622 - 5847.828: 2.3031% ( 47) 00:08:22.724 5847.828 - 5873.034: 2.8089% ( 56) 00:08:22.724 5873.034 - 5898.240: 3.1250% ( 35) 00:08:22.724 5898.240 - 5923.446: 3.4592% ( 37) 00:08:22.724 5923.446 - 5948.652: 3.8204% ( 40) 00:08:22.724 5948.652 - 5973.858: 4.2540% ( 48) 00:08:22.724 5973.858 - 5999.065: 4.7417% ( 54) 00:08:22.724 5999.065 - 6024.271: 5.1842% ( 49) 00:08:22.724 6024.271 - 6049.477: 5.6449% ( 51) 00:08:22.724 6049.477 - 6074.683: 6.0242% ( 42) 00:08:22.724 6074.683 - 6099.889: 6.5119% ( 54) 00:08:22.724 6099.889 - 6125.095: 6.9725% ( 51) 00:08:22.724 6125.095 - 6150.302: 7.4061% ( 48) 00:08:22.724 6150.302 - 6175.508: 7.8757% ( 52) 00:08:22.724 6175.508 - 6200.714: 8.3815% ( 56) 00:08:22.724 6200.714 - 6225.920: 8.8963% ( 57) 00:08:22.724 6225.920 - 6251.126: 9.4653% ( 63) 00:08:22.724 6251.126 - 6276.332: 10.1517% ( 76) 00:08:22.724 6276.332 - 6301.538: 10.7749% ( 69) 00:08:22.724 6301.538 - 6326.745: 11.5336% ( 84) 00:08:22.724 6326.745 - 6351.951: 12.3103% ( 86) 00:08:22.724 6351.951 - 6377.157: 13.0419% ( 81) 00:08:22.724 6377.157 - 6402.363: 13.6290% ( 65) 00:08:22.724 6402.363 - 6427.569: 14.2973% ( 74) 00:08:22.724 6427.569 - 6452.775: 15.0379% ( 82) 00:08:22.724 6452.775 - 6503.188: 16.5101% ( 163) 00:08:22.724 6503.188 - 6553.600: 17.8107% ( 144) 00:08:22.724 6553.600 - 6604.012: 19.1384% ( 147) 00:08:22.724 6604.012 - 6654.425: 20.2041% ( 118) 00:08:22.724 6654.425 - 6704.837: 21.1163% ( 101) 00:08:22.724 6704.837 - 6755.249: 21.9021% ( 87) 00:08:22.724 6755.249 - 6805.662: 22.6608% ( 84) 00:08:22.724 6805.662 - 6856.074: 23.3652% ( 78) 00:08:22.724 6856.074 - 6906.486: 24.0697% ( 78) 00:08:22.724 6906.486 - 6956.898: 24.7200% ( 72) 00:08:22.724 6956.898 - 7007.311: 25.4516% ( 81) 00:08:22.724 7007.311 - 7057.723: 26.1199% ( 74) 00:08:22.724 7057.723 - 7108.135: 26.8064% ( 76) 00:08:22.724 7108.135 - 7158.548: 27.3844% ( 64) 00:08:22.724 7158.548 - 7208.960: 27.7276% ( 38) 00:08:22.724 7208.960 - 7259.372: 28.0437% ( 35) 00:08:22.724 7259.372 - 7309.785: 28.2243% ( 20) 00:08:22.724 7309.785 - 7360.197: 28.3960% ( 19) 00:08:22.724 7360.197 - 7410.609: 28.5043% ( 12) 00:08:22.724 7410.609 - 7461.022: 28.6127% ( 12) 00:08:22.724 7461.022 - 7511.434: 28.7121% ( 11) 00:08:22.724 7511.434 - 7561.846: 28.8024% ( 10) 00:08:22.724 7561.846 - 7612.258: 28.9108% ( 12) 00:08:22.724 7612.258 - 7662.671: 29.0011% ( 10) 00:08:22.724 7662.671 - 7713.083: 29.0914% ( 10) 00:08:22.724 7713.083 - 7763.495: 29.1637% ( 8) 00:08:22.724 7763.495 - 
7813.908: 29.2630% ( 11) 00:08:22.724 7813.908 - 7864.320: 29.3624% ( 11) 00:08:22.724 7864.320 - 7914.732: 29.4707% ( 12) 00:08:22.724 7914.732 - 7965.145: 29.5340% ( 7) 00:08:22.724 7965.145 - 8015.557: 29.6062% ( 8) 00:08:22.724 8015.557 - 8065.969: 29.6694% ( 7) 00:08:22.724 8065.969 - 8116.382: 29.7417% ( 8) 00:08:22.724 8116.382 - 8166.794: 29.8320% ( 10) 00:08:22.724 8166.794 - 8217.206: 29.9043% ( 8) 00:08:22.724 8217.206 - 8267.618: 29.9946% ( 10) 00:08:22.724 8267.618 - 8318.031: 30.1391% ( 16) 00:08:22.724 8318.031 - 8368.443: 30.3017% ( 18) 00:08:22.724 8368.443 - 8418.855: 30.4552% ( 17) 00:08:22.724 8418.855 - 8469.268: 30.5907% ( 15) 00:08:22.724 8469.268 - 8519.680: 30.7442% ( 17) 00:08:22.724 8519.680 - 8570.092: 30.8978% ( 17) 00:08:22.724 8570.092 - 8620.505: 31.0423% ( 16) 00:08:22.724 8620.505 - 8670.917: 31.1777% ( 15) 00:08:22.724 8670.917 - 8721.329: 31.3042% ( 14) 00:08:22.724 8721.329 - 8771.742: 31.4306% ( 14) 00:08:22.724 8771.742 - 8822.154: 31.5480% ( 13) 00:08:22.724 8822.154 - 8872.566: 31.7919% ( 27) 00:08:22.724 8872.566 - 8922.978: 31.9545% ( 18) 00:08:22.724 8922.978 - 8973.391: 32.1261% ( 19) 00:08:22.724 8973.391 - 9023.803: 32.2616% ( 15) 00:08:22.724 9023.803 - 9074.215: 32.3970% ( 15) 00:08:22.724 9074.215 - 9124.628: 32.5145% ( 13) 00:08:22.724 9124.628 - 9175.040: 32.6228% ( 12) 00:08:22.724 9175.040 - 9225.452: 32.7222% ( 11) 00:08:22.724 9225.452 - 9275.865: 32.7854% ( 7) 00:08:22.724 9275.865 - 9326.277: 32.8667% ( 9) 00:08:22.724 9326.277 - 9376.689: 32.9570% ( 10) 00:08:22.724 9376.689 - 9427.102: 33.0473% ( 10) 00:08:22.724 9427.102 - 9477.514: 33.1196% ( 8) 00:08:22.724 9477.514 - 9527.926: 33.1918% ( 8) 00:08:22.724 9527.926 - 9578.338: 33.2731% ( 9) 00:08:22.724 9578.338 - 9628.751: 33.3363% ( 7) 00:08:22.724 9628.751 - 9679.163: 33.4447% ( 12) 00:08:22.724 9679.163 - 9729.575: 33.5170% ( 8) 00:08:22.724 9729.575 - 9779.988: 33.5892% ( 8) 00:08:22.724 9779.988 - 9830.400: 33.6525% ( 7) 00:08:22.724 9830.400 - 9880.812: 33.7699% ( 13) 00:08:22.724 9880.812 - 9931.225: 33.8602% ( 10) 00:08:22.724 9931.225 - 9981.637: 33.9776% ( 13) 00:08:22.724 9981.637 - 10032.049: 34.0860% ( 12) 00:08:22.724 10032.049 - 10082.462: 34.2124% ( 14) 00:08:22.725 10082.462 - 10132.874: 34.4743% ( 29) 00:08:22.725 10132.874 - 10183.286: 34.7272% ( 28) 00:08:22.725 10183.286 - 10233.698: 35.0253% ( 33) 00:08:22.725 10233.698 - 10284.111: 35.3866% ( 40) 00:08:22.725 10284.111 - 10334.523: 35.9104% ( 58) 00:08:22.725 10334.523 - 10384.935: 36.3981% ( 54) 00:08:22.725 10384.935 - 10435.348: 36.9491% ( 61) 00:08:22.725 10435.348 - 10485.760: 37.5903% ( 71) 00:08:22.725 10485.760 - 10536.172: 38.2135% ( 69) 00:08:22.725 10536.172 - 10586.585: 38.7825% ( 63) 00:08:22.725 10586.585 - 10636.997: 39.4689% ( 76) 00:08:22.725 10636.997 - 10687.409: 40.1553% ( 76) 00:08:22.725 10687.409 - 10737.822: 40.9501% ( 88) 00:08:22.725 10737.822 - 10788.234: 41.7269% ( 86) 00:08:22.725 10788.234 - 10838.646: 42.5036% ( 86) 00:08:22.725 10838.646 - 10889.058: 43.2803% ( 86) 00:08:22.725 10889.058 - 10939.471: 44.1022% ( 91) 00:08:22.725 10939.471 - 10989.883: 44.8428% ( 82) 00:08:22.725 10989.883 - 11040.295: 45.5925% ( 83) 00:08:22.725 11040.295 - 11090.708: 46.3150% ( 80) 00:08:22.725 11090.708 - 11141.120: 47.0285% ( 79) 00:08:22.725 11141.120 - 11191.532: 47.7691% ( 82) 00:08:22.725 11191.532 - 11241.945: 48.4014% ( 70) 00:08:22.725 11241.945 - 11292.357: 49.0246% ( 69) 00:08:22.725 11292.357 - 11342.769: 49.6297% ( 67) 00:08:22.725 11342.769 - 11393.182: 50.0632% ( 48) 
00:08:22.725 11393.182 - 11443.594: 50.4877% ( 47) 00:08:22.725 11443.594 - 11494.006: 50.8309% ( 38) 00:08:22.725 11494.006 - 11544.418: 51.1651% ( 37) 00:08:22.725 11544.418 - 11594.831: 51.4090% ( 27) 00:08:22.725 11594.831 - 11645.243: 51.6438% ( 26) 00:08:22.725 11645.243 - 11695.655: 51.8786% ( 26) 00:08:22.725 11695.655 - 11746.068: 52.0954% ( 24) 00:08:22.725 11746.068 - 11796.480: 52.2760% ( 20) 00:08:22.725 11796.480 - 11846.892: 52.4837% ( 23) 00:08:22.725 11846.892 - 11897.305: 52.7095% ( 25) 00:08:22.725 11897.305 - 11947.717: 52.8902% ( 20) 00:08:22.725 11947.717 - 11998.129: 53.0618% ( 19) 00:08:22.725 11998.129 - 12048.542: 53.2605% ( 22) 00:08:22.725 12048.542 - 12098.954: 53.4682% ( 23) 00:08:22.725 12098.954 - 12149.366: 53.7030% ( 26) 00:08:22.725 12149.366 - 12199.778: 53.9288% ( 25) 00:08:22.725 12199.778 - 12250.191: 54.1727% ( 27) 00:08:22.725 12250.191 - 12300.603: 54.4075% ( 26) 00:08:22.725 12300.603 - 12351.015: 54.6604% ( 28) 00:08:22.725 12351.015 - 12401.428: 54.9585% ( 33) 00:08:22.725 12401.428 - 12451.840: 55.2384% ( 31) 00:08:22.725 12451.840 - 12502.252: 55.4733% ( 26) 00:08:22.725 12502.252 - 12552.665: 55.7171% ( 27) 00:08:22.725 12552.665 - 12603.077: 55.9971% ( 31) 00:08:22.725 12603.077 - 12653.489: 56.3584% ( 40) 00:08:22.725 12653.489 - 12703.902: 56.6745% ( 35) 00:08:22.725 12703.902 - 12754.314: 56.9996% ( 36) 00:08:22.725 12754.314 - 12804.726: 57.3067% ( 34) 00:08:22.725 12804.726 - 12855.138: 57.6319% ( 36) 00:08:22.725 12855.138 - 12905.551: 57.9389% ( 34) 00:08:22.725 12905.551 - 13006.375: 58.4628% ( 58) 00:08:22.725 13006.375 - 13107.200: 59.0047% ( 60) 00:08:22.725 13107.200 - 13208.025: 59.5195% ( 57) 00:08:22.725 13208.025 - 13308.849: 60.1337% ( 68) 00:08:22.725 13308.849 - 13409.674: 60.7207% ( 65) 00:08:22.725 13409.674 - 13510.498: 61.5788% ( 95) 00:08:22.725 13510.498 - 13611.323: 62.5361% ( 106) 00:08:22.725 13611.323 - 13712.148: 63.7012% ( 129) 00:08:22.725 13712.148 - 13812.972: 64.9928% ( 143) 00:08:22.725 13812.972 - 13913.797: 66.4740% ( 164) 00:08:22.725 13913.797 - 14014.622: 68.2713% ( 199) 00:08:22.725 14014.622 - 14115.446: 69.8970% ( 180) 00:08:22.725 14115.446 - 14216.271: 71.3692% ( 163) 00:08:22.725 14216.271 - 14317.095: 72.7601% ( 154) 00:08:22.725 14317.095 - 14417.920: 74.1420% ( 153) 00:08:22.725 14417.920 - 14518.745: 75.3613% ( 135) 00:08:22.725 14518.745 - 14619.569: 76.5083% ( 127) 00:08:22.725 14619.569 - 14720.394: 77.4386% ( 103) 00:08:22.725 14720.394 - 14821.218: 78.2153% ( 86) 00:08:22.725 14821.218 - 14922.043: 78.8927% ( 75) 00:08:22.725 14922.043 - 15022.868: 79.4978% ( 67) 00:08:22.725 15022.868 - 15123.692: 80.0578% ( 62) 00:08:22.725 15123.692 - 15224.517: 80.6629% ( 67) 00:08:22.725 15224.517 - 15325.342: 81.2048% ( 60) 00:08:22.725 15325.342 - 15426.166: 81.7197% ( 57) 00:08:22.725 15426.166 - 15526.991: 82.4061% ( 76) 00:08:22.725 15526.991 - 15627.815: 83.0022% ( 66) 00:08:22.725 15627.815 - 15728.640: 83.6073% ( 67) 00:08:22.725 15728.640 - 15829.465: 84.2124% ( 67) 00:08:22.725 15829.465 - 15930.289: 84.9440% ( 81) 00:08:22.725 15930.289 - 16031.114: 85.8111% ( 96) 00:08:22.725 16031.114 - 16131.938: 86.7233% ( 101) 00:08:22.725 16131.938 - 16232.763: 87.5993% ( 97) 00:08:22.725 16232.763 - 16333.588: 88.4664% ( 96) 00:08:22.725 16333.588 - 16434.412: 89.1618% ( 77) 00:08:22.725 16434.412 - 16535.237: 89.8754% ( 79) 00:08:22.725 16535.237 - 16636.062: 90.7695% ( 99) 00:08:22.725 16636.062 - 16736.886: 91.6366% ( 96) 00:08:22.725 16736.886 - 16837.711: 92.4043% ( 85) 00:08:22.725 16837.711 
- 16938.535: 92.9823% ( 64) 00:08:22.725 16938.535 - 17039.360: 93.4971% ( 57) 00:08:22.725 17039.360 - 17140.185: 93.9216% ( 47) 00:08:22.725 17140.185 - 17241.009: 94.2648% ( 38) 00:08:22.725 17241.009 - 17341.834: 94.4816% ( 24) 00:08:22.725 17341.834 - 17442.658: 94.6983% ( 24) 00:08:22.725 17442.658 - 17543.483: 94.8609% ( 18) 00:08:22.725 17543.483 - 17644.308: 95.0145% ( 17) 00:08:22.725 17644.308 - 17745.132: 95.1048% ( 10) 00:08:22.725 17745.132 - 17845.957: 95.1770% ( 8) 00:08:22.725 17845.957 - 17946.782: 95.2493% ( 8) 00:08:22.725 17946.782 - 18047.606: 95.3848% ( 15) 00:08:22.725 18047.606 - 18148.431: 95.4931% ( 12) 00:08:22.725 18148.431 - 18249.255: 95.6286% ( 15) 00:08:22.725 18249.255 - 18350.080: 95.7551% ( 14) 00:08:22.725 18350.080 - 18450.905: 95.9176% ( 18) 00:08:22.725 18450.905 - 18551.729: 96.0892% ( 19) 00:08:22.725 18551.729 - 18652.554: 96.2789% ( 21) 00:08:22.725 18652.554 - 18753.378: 96.5228% ( 27) 00:08:22.725 18753.378 - 18854.203: 96.7576% ( 26) 00:08:22.725 18854.203 - 18955.028: 96.9473% ( 21) 00:08:22.725 18955.028 - 19055.852: 97.1279% ( 20) 00:08:22.725 19055.852 - 19156.677: 97.2453% ( 13) 00:08:22.725 19156.677 - 19257.502: 97.3808% ( 15) 00:08:22.725 19257.502 - 19358.326: 97.5343% ( 17) 00:08:22.725 19358.326 - 19459.151: 97.6517% ( 13) 00:08:22.725 19459.151 - 19559.975: 97.7782% ( 14) 00:08:22.725 19559.975 - 19660.800: 97.9137% ( 15) 00:08:22.725 19660.800 - 19761.625: 98.0401% ( 14) 00:08:22.725 19761.625 - 19862.449: 98.1665% ( 14) 00:08:22.725 19862.449 - 19963.274: 98.3020% ( 15) 00:08:22.725 19963.274 - 20064.098: 98.4014% ( 11) 00:08:22.725 20064.098 - 20164.923: 98.5278% ( 14) 00:08:22.725 20164.923 - 20265.748: 98.6091% ( 9) 00:08:22.725 20265.748 - 20366.572: 98.6723% ( 7) 00:08:22.725 20366.572 - 20467.397: 98.7265% ( 6) 00:08:22.725 20467.397 - 20568.222: 98.7807% ( 6) 00:08:22.725 20568.222 - 20669.046: 98.8078% ( 3) 00:08:22.725 20669.046 - 20769.871: 98.8259% ( 2) 00:08:22.725 20769.871 - 20870.695: 98.8439% ( 2) 00:08:22.725 22483.889 - 22584.714: 98.8801% ( 4) 00:08:22.725 22584.714 - 22685.538: 98.9342% ( 6) 00:08:22.725 22685.538 - 22786.363: 98.9975% ( 7) 00:08:22.725 22786.363 - 22887.188: 99.0426% ( 5) 00:08:22.725 22887.188 - 22988.012: 99.1059% ( 7) 00:08:22.725 22988.012 - 23088.837: 99.1600% ( 6) 00:08:22.725 23088.837 - 23189.662: 99.2233% ( 7) 00:08:22.725 23189.662 - 23290.486: 99.2775% ( 6) 00:08:22.725 23290.486 - 23391.311: 99.3316% ( 6) 00:08:22.725 23391.311 - 23492.135: 99.3949% ( 7) 00:08:22.725 23492.135 - 23592.960: 99.4220% ( 3) 00:08:22.725 30852.332 - 31053.982: 99.4762% ( 6) 00:08:22.725 31053.982 - 31255.631: 99.5755% ( 11) 00:08:22.725 31255.631 - 31457.280: 99.6658% ( 10) 00:08:22.725 31457.280 - 31658.929: 99.7652% ( 11) 00:08:22.725 31658.929 - 31860.578: 99.8555% ( 10) 00:08:22.725 31860.578 - 32062.228: 99.9548% ( 11) 00:08:22.725 32062.228 - 32263.877: 100.0000% ( 5) 00:08:22.725 00:08:22.725 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:22.725 ============================================================================== 00:08:22.725 Range in us Cumulative IO count 00:08:22.725 3428.037 - 3453.243: 0.0181% ( 2) 00:08:22.725 3453.243 - 3478.449: 0.0452% ( 3) 00:08:22.725 3478.449 - 3503.655: 0.0632% ( 2) 00:08:22.725 3503.655 - 3528.862: 0.0903% ( 3) 00:08:22.725 3528.862 - 3554.068: 0.1174% ( 3) 00:08:22.725 3554.068 - 3579.274: 0.1535% ( 4) 00:08:22.725 3579.274 - 3604.480: 0.1806% ( 3) 00:08:22.726 3604.480 - 3629.686: 0.1987% ( 2) 00:08:22.726 3629.686 - 3654.892: 
0.2258% ( 3) 00:08:22.726 3654.892 - 3680.098: 0.2529% ( 3) 00:08:22.726 3680.098 - 3705.305: 0.2800% ( 3) 00:08:22.726 3705.305 - 3730.511: 0.3071% ( 3) 00:08:22.726 3730.511 - 3755.717: 0.3251% ( 2) 00:08:22.726 3755.717 - 3780.923: 0.3522% ( 3) 00:08:22.726 3780.923 - 3806.129: 0.3793% ( 3) 00:08:22.726 3806.129 - 3831.335: 0.4064% ( 3) 00:08:22.726 3831.335 - 3856.542: 0.4335% ( 3) 00:08:22.726 3856.542 - 3881.748: 0.4606% ( 3) 00:08:22.726 3881.748 - 3906.954: 0.4877% ( 3) 00:08:22.726 3906.954 - 3932.160: 0.5148% ( 3) 00:08:22.726 3932.160 - 3957.366: 0.5419% ( 3) 00:08:22.726 3957.366 - 3982.572: 0.5690% ( 3) 00:08:22.726 3982.572 - 4007.778: 0.5780% ( 1) 00:08:22.726 5142.055 - 5167.262: 0.6322% ( 6) 00:08:22.726 5167.262 - 5192.468: 0.6503% ( 2) 00:08:22.726 5192.468 - 5217.674: 0.6684% ( 2) 00:08:22.726 5217.674 - 5242.880: 0.6864% ( 2) 00:08:22.726 5242.880 - 5268.086: 0.7135% ( 3) 00:08:22.726 5268.086 - 5293.292: 0.7406% ( 3) 00:08:22.726 5293.292 - 5318.498: 0.7587% ( 2) 00:08:22.726 5318.498 - 5343.705: 0.7767% ( 2) 00:08:22.726 5343.705 - 5368.911: 0.8038% ( 3) 00:08:22.726 5368.911 - 5394.117: 0.8309% ( 3) 00:08:22.726 5394.117 - 5419.323: 0.8490% ( 2) 00:08:22.726 5419.323 - 5444.529: 0.8761% ( 3) 00:08:22.726 5444.529 - 5469.735: 0.8941% ( 2) 00:08:22.726 5469.735 - 5494.942: 0.9212% ( 3) 00:08:22.726 5494.942 - 5520.148: 0.9483% ( 3) 00:08:22.726 5520.148 - 5545.354: 0.9664% ( 2) 00:08:22.726 5545.354 - 5570.560: 0.9935% ( 3) 00:08:22.726 5570.560 - 5595.766: 1.0116% ( 2) 00:08:22.726 5595.766 - 5620.972: 1.0387% ( 3) 00:08:22.726 5620.972 - 5646.178: 1.0748% ( 4) 00:08:22.726 5646.178 - 5671.385: 1.1561% ( 9) 00:08:22.726 5671.385 - 5696.591: 1.2645% ( 12) 00:08:22.726 5696.591 - 5721.797: 1.4451% ( 20) 00:08:22.726 5721.797 - 5747.003: 1.5986% ( 17) 00:08:22.726 5747.003 - 5772.209: 1.8154% ( 24) 00:08:22.726 5772.209 - 5797.415: 2.1586% ( 38) 00:08:22.726 5797.415 - 5822.622: 2.6102% ( 50) 00:08:22.726 5822.622 - 5847.828: 2.8450% ( 26) 00:08:22.726 5847.828 - 5873.034: 3.1521% ( 34) 00:08:22.726 5873.034 - 5898.240: 3.5224% ( 41) 00:08:22.726 5898.240 - 5923.446: 3.7843% ( 29) 00:08:22.726 5923.446 - 5948.652: 4.2540% ( 52) 00:08:22.726 5948.652 - 5973.858: 4.6965% ( 49) 00:08:22.726 5973.858 - 5999.065: 5.1210% ( 47) 00:08:22.726 5999.065 - 6024.271: 5.5184% ( 44) 00:08:22.726 6024.271 - 6049.477: 5.9971% ( 53) 00:08:22.726 6049.477 - 6074.683: 6.4758% ( 53) 00:08:22.726 6074.683 - 6099.889: 6.9364% ( 51) 00:08:22.726 6099.889 - 6125.095: 7.4332% ( 55) 00:08:22.726 6125.095 - 6150.302: 7.8667% ( 48) 00:08:22.726 6150.302 - 6175.508: 8.3092% ( 49) 00:08:22.726 6175.508 - 6200.714: 8.7518% ( 49) 00:08:22.726 6200.714 - 6225.920: 9.2847% ( 59) 00:08:22.726 6225.920 - 6251.126: 9.8717% ( 65) 00:08:22.726 6251.126 - 6276.332: 10.5582% ( 76) 00:08:22.726 6276.332 - 6301.538: 11.2355% ( 75) 00:08:22.726 6301.538 - 6326.745: 12.0394% ( 89) 00:08:22.726 6326.745 - 6351.951: 12.7348% ( 77) 00:08:22.726 6351.951 - 6377.157: 13.4845% ( 83) 00:08:22.726 6377.157 - 6402.363: 14.2070% ( 80) 00:08:22.726 6402.363 - 6427.569: 14.8934% ( 76) 00:08:22.726 6427.569 - 6452.775: 15.5798% ( 76) 00:08:22.726 6452.775 - 6503.188: 16.9346% ( 150) 00:08:22.726 6503.188 - 6553.600: 18.3887% ( 161) 00:08:22.726 6553.600 - 6604.012: 19.6080% ( 135) 00:08:22.726 6604.012 - 6654.425: 20.6557% ( 116) 00:08:22.726 6654.425 - 6704.837: 21.4686% ( 90) 00:08:22.726 6704.837 - 6755.249: 22.2092% ( 82) 00:08:22.726 6755.249 - 6805.662: 22.9137% ( 78) 00:08:22.726 6805.662 - 6856.074: 23.5278% ( 68) 
00:08:22.726 6856.074 - 6906.486: 24.1962% ( 74) 00:08:22.726 6906.486 - 6956.898: 24.8374% ( 71) 00:08:22.726 6956.898 - 7007.311: 25.5058% ( 74) 00:08:22.726 7007.311 - 7057.723: 26.1922% ( 76) 00:08:22.726 7057.723 - 7108.135: 26.8335% ( 71) 00:08:22.726 7108.135 - 7158.548: 27.4115% ( 64) 00:08:22.726 7158.548 - 7208.960: 27.7457% ( 37) 00:08:22.726 7208.960 - 7259.372: 28.0076% ( 29) 00:08:22.726 7259.372 - 7309.785: 28.1521% ( 16) 00:08:22.726 7309.785 - 7360.197: 28.2695% ( 13) 00:08:22.726 7360.197 - 7410.609: 28.3869% ( 13) 00:08:22.726 7410.609 - 7461.022: 28.5043% ( 13) 00:08:22.726 7461.022 - 7511.434: 28.6308% ( 14) 00:08:22.726 7511.434 - 7561.846: 28.7663% ( 15) 00:08:22.726 7561.846 - 7612.258: 28.9017% ( 15) 00:08:22.726 7612.258 - 7662.671: 29.0553% ( 17) 00:08:22.726 7662.671 - 7713.083: 29.2088% ( 17) 00:08:22.726 7713.083 - 7763.495: 29.3804% ( 19) 00:08:22.726 7763.495 - 7813.908: 29.5430% ( 18) 00:08:22.726 7813.908 - 7864.320: 29.7146% ( 19) 00:08:22.726 7864.320 - 7914.732: 29.8320% ( 13) 00:08:22.726 7914.732 - 7965.145: 29.9946% ( 18) 00:08:22.726 7965.145 - 8015.557: 30.1391% ( 16) 00:08:22.726 8015.557 - 8065.969: 30.3107% ( 19) 00:08:22.726 8065.969 - 8116.382: 30.4823% ( 19) 00:08:22.726 8116.382 - 8166.794: 30.6629% ( 20) 00:08:22.726 8166.794 - 8217.206: 30.8165% ( 17) 00:08:22.726 8217.206 - 8267.618: 31.0242% ( 23) 00:08:22.726 8267.618 - 8318.031: 31.1958% ( 19) 00:08:22.726 8318.031 - 8368.443: 31.3764% ( 20) 00:08:22.726 8368.443 - 8418.855: 31.5029% ( 14) 00:08:22.726 8418.855 - 8469.268: 31.6293% ( 14) 00:08:22.726 8469.268 - 8519.680: 31.7467% ( 13) 00:08:22.726 8519.680 - 8570.092: 31.8732% ( 14) 00:08:22.726 8570.092 - 8620.505: 31.9996% ( 14) 00:08:22.726 8620.505 - 8670.917: 32.0900% ( 10) 00:08:22.726 8670.917 - 8721.329: 32.1983% ( 12) 00:08:22.726 8721.329 - 8771.742: 32.2616% ( 7) 00:08:22.726 8771.742 - 8822.154: 32.3609% ( 11) 00:08:22.726 8822.154 - 8872.566: 32.4603% ( 11) 00:08:22.726 8872.566 - 8922.978: 32.5506% ( 10) 00:08:22.726 8922.978 - 8973.391: 32.6499% ( 11) 00:08:22.726 8973.391 - 9023.803: 32.8035% ( 17) 00:08:22.726 9023.803 - 9074.215: 32.9209% ( 13) 00:08:22.726 9074.215 - 9124.628: 33.0654% ( 16) 00:08:22.726 9124.628 - 9175.040: 33.2009% ( 15) 00:08:22.726 9175.040 - 9225.452: 33.3273% ( 14) 00:08:22.726 9225.452 - 9275.865: 33.4628% ( 15) 00:08:22.726 9275.865 - 9326.277: 33.5983% ( 15) 00:08:22.726 9326.277 - 9376.689: 33.7518% ( 17) 00:08:22.726 9376.689 - 9427.102: 33.8873% ( 15) 00:08:22.726 9427.102 - 9477.514: 34.0228% ( 15) 00:08:22.726 9477.514 - 9527.926: 34.1402% ( 13) 00:08:22.726 9527.926 - 9578.338: 34.2757% ( 15) 00:08:22.726 9578.338 - 9628.751: 34.3931% ( 13) 00:08:22.726 9628.751 - 9679.163: 34.5195% ( 14) 00:08:22.726 9679.163 - 9729.575: 34.6279% ( 12) 00:08:22.726 9729.575 - 9779.988: 34.7453% ( 13) 00:08:22.726 9779.988 - 9830.400: 34.8717% ( 14) 00:08:22.726 9830.400 - 9880.812: 34.9621% ( 10) 00:08:22.726 9880.812 - 9931.225: 35.0343% ( 8) 00:08:22.726 9931.225 - 9981.637: 35.1517% ( 13) 00:08:22.726 9981.637 - 10032.049: 35.3233% ( 19) 00:08:22.726 10032.049 - 10082.462: 35.5220% ( 22) 00:08:22.726 10082.462 - 10132.874: 35.7749% ( 28) 00:08:22.726 10132.874 - 10183.286: 36.0549% ( 31) 00:08:22.726 10183.286 - 10233.698: 36.2897% ( 26) 00:08:22.726 10233.698 - 10284.111: 36.6510% ( 40) 00:08:22.726 10284.111 - 10334.523: 37.0033% ( 39) 00:08:22.726 10334.523 - 10384.935: 37.3284% ( 36) 00:08:22.726 10384.935 - 10435.348: 37.6806% ( 39) 00:08:22.726 10435.348 - 10485.760: 38.1232% ( 49) 
00:08:22.726 10485.760 - 10536.172: 38.6019% ( 53) 00:08:22.726 10536.172 - 10586.585: 39.1348% ( 59) 00:08:22.726 10586.585 - 10636.997: 39.7760% ( 71) 00:08:22.726 10636.997 - 10687.409: 40.2999% ( 58) 00:08:22.726 10687.409 - 10737.822: 40.9501% ( 72) 00:08:22.726 10737.822 - 10788.234: 41.5733% ( 69) 00:08:22.726 10788.234 - 10838.646: 42.2236% ( 72) 00:08:22.726 10838.646 - 10889.058: 42.9191% ( 77) 00:08:22.726 10889.058 - 10939.471: 43.5694% ( 72) 00:08:22.726 10939.471 - 10989.883: 44.3371% ( 85) 00:08:22.726 10989.883 - 11040.295: 45.0596% ( 80) 00:08:22.726 11040.295 - 11090.708: 45.8725% ( 90) 00:08:22.726 11090.708 - 11141.120: 46.6040% ( 81) 00:08:22.726 11141.120 - 11191.532: 47.3085% ( 78) 00:08:22.726 11191.532 - 11241.945: 47.9227% ( 68) 00:08:22.726 11241.945 - 11292.357: 48.5639% ( 71) 00:08:22.726 11292.357 - 11342.769: 49.1239% ( 62) 00:08:22.726 11342.769 - 11393.182: 49.7471% ( 69) 00:08:22.726 11393.182 - 11443.594: 50.3161% ( 63) 00:08:22.726 11443.594 - 11494.006: 50.8219% ( 56) 00:08:22.726 11494.006 - 11544.418: 51.2825% ( 51) 00:08:22.726 11544.418 - 11594.831: 51.6980% ( 46) 00:08:22.726 11594.831 - 11645.243: 52.0592% ( 40) 00:08:22.726 11645.243 - 11695.655: 52.4296% ( 41) 00:08:22.726 11695.655 - 11746.068: 52.7637% ( 37) 00:08:22.726 11746.068 - 11796.480: 53.0708% ( 34) 00:08:22.726 11796.480 - 11846.892: 53.3237% ( 28) 00:08:22.726 11846.892 - 11897.305: 53.6127% ( 32) 00:08:22.726 11897.305 - 11947.717: 53.8566% ( 27) 00:08:22.726 11947.717 - 11998.129: 54.1095% ( 28) 00:08:22.726 11998.129 - 12048.542: 54.3533% ( 27) 00:08:22.726 12048.542 - 12098.954: 54.5882% ( 26) 00:08:22.726 12098.954 - 12149.366: 54.7598% ( 19) 00:08:22.726 12149.366 - 12199.778: 54.9223% ( 18) 00:08:22.726 12199.778 - 12250.191: 55.0759% ( 17) 00:08:22.726 12250.191 - 12300.603: 55.2113% ( 15) 00:08:22.726 12300.603 - 12351.015: 55.3378% ( 14) 00:08:22.726 12351.015 - 12401.428: 55.4823% ( 16) 00:08:22.726 12401.428 - 12451.840: 55.6087% ( 14) 00:08:22.726 12451.840 - 12502.252: 55.7352% ( 14) 00:08:22.726 12502.252 - 12552.665: 55.8707% ( 15) 00:08:22.726 12552.665 - 12603.077: 55.9971% ( 14) 00:08:22.726 12603.077 - 12653.489: 56.1416% ( 16) 00:08:22.726 12653.489 - 12703.902: 56.2771% ( 15) 00:08:22.726 12703.902 - 12754.314: 56.3674% ( 10) 00:08:22.727 12754.314 - 12804.726: 56.5029% ( 15) 00:08:22.727 12804.726 - 12855.138: 56.6474% ( 16) 00:08:22.727 12855.138 - 12905.551: 56.8100% ( 18) 00:08:22.727 12905.551 - 13006.375: 57.0358% ( 25) 00:08:22.727 13006.375 - 13107.200: 57.2525% ( 24) 00:08:22.727 13107.200 - 13208.025: 57.5325% ( 31) 00:08:22.727 13208.025 - 13308.849: 58.1015% ( 63) 00:08:22.727 13308.849 - 13409.674: 58.8602% ( 84) 00:08:22.727 13409.674 - 13510.498: 59.6460% ( 87) 00:08:22.727 13510.498 - 13611.323: 60.6665% ( 113) 00:08:22.727 13611.323 - 13712.148: 61.9491% ( 142) 00:08:22.727 13712.148 - 13812.972: 63.3309% ( 153) 00:08:22.727 13812.972 - 13913.797: 64.6676% ( 148) 00:08:22.727 13913.797 - 14014.622: 66.3656% ( 188) 00:08:22.727 14014.622 - 14115.446: 67.9371% ( 174) 00:08:22.727 14115.446 - 14216.271: 69.6622% ( 191) 00:08:22.727 14216.271 - 14317.095: 71.1886% ( 169) 00:08:22.727 14317.095 - 14417.920: 72.5975% ( 156) 00:08:22.727 14417.920 - 14518.745: 73.9884% ( 154) 00:08:22.727 14518.745 - 14619.569: 75.1626% ( 130) 00:08:22.727 14619.569 - 14720.394: 76.3638% ( 133) 00:08:22.727 14720.394 - 14821.218: 77.4115% ( 116) 00:08:22.727 14821.218 - 14922.043: 78.3689% ( 106) 00:08:22.727 14922.043 - 15022.868: 79.2811% ( 101) 00:08:22.727 
15022.868 - 15123.692: 80.2384% ( 106) 00:08:22.727 15123.692 - 15224.517: 81.2590% ( 113) 00:08:22.727 15224.517 - 15325.342: 82.2525% ( 110) 00:08:22.727 15325.342 - 15426.166: 83.2370% ( 109) 00:08:22.727 15426.166 - 15526.991: 84.0860% ( 94) 00:08:22.727 15526.991 - 15627.815: 84.8898% ( 89) 00:08:22.727 15627.815 - 15728.640: 85.6665% ( 86) 00:08:22.727 15728.640 - 15829.465: 86.5155% ( 94) 00:08:22.727 15829.465 - 15930.289: 87.2200% ( 78) 00:08:22.727 15930.289 - 16031.114: 87.7439% ( 58) 00:08:22.727 16031.114 - 16131.938: 88.1954% ( 50) 00:08:22.727 16131.938 - 16232.763: 88.6109% ( 46) 00:08:22.727 16232.763 - 16333.588: 89.0806% ( 52) 00:08:22.727 16333.588 - 16434.412: 89.6044% ( 58) 00:08:22.727 16434.412 - 16535.237: 90.2276% ( 69) 00:08:22.727 16535.237 - 16636.062: 90.8779% ( 72) 00:08:22.727 16636.062 - 16736.886: 91.4469% ( 63) 00:08:22.727 16736.886 - 16837.711: 91.9978% ( 61) 00:08:22.727 16837.711 - 16938.535: 92.4585% ( 51) 00:08:22.727 16938.535 - 17039.360: 92.9100% ( 50) 00:08:22.727 17039.360 - 17140.185: 93.3707% ( 51) 00:08:22.727 17140.185 - 17241.009: 93.7861% ( 46) 00:08:22.727 17241.009 - 17341.834: 94.2197% ( 48) 00:08:22.727 17341.834 - 17442.658: 94.5719% ( 39) 00:08:22.727 17442.658 - 17543.483: 94.8428% ( 30) 00:08:22.727 17543.483 - 17644.308: 95.0325% ( 21) 00:08:22.727 17644.308 - 17745.132: 95.1861% ( 17) 00:08:22.727 17745.132 - 17845.957: 95.3667% ( 20) 00:08:22.727 17845.957 - 17946.782: 95.5654% ( 22) 00:08:22.727 17946.782 - 18047.606: 95.8454% ( 31) 00:08:22.727 18047.606 - 18148.431: 96.0621% ( 24) 00:08:22.727 18148.431 - 18249.255: 96.2879% ( 25) 00:08:22.727 18249.255 - 18350.080: 96.4505% ( 18) 00:08:22.727 18350.080 - 18450.905: 96.6131% ( 18) 00:08:22.727 18450.905 - 18551.729: 96.7757% ( 18) 00:08:22.727 18551.729 - 18652.554: 96.9111% ( 15) 00:08:22.727 18652.554 - 18753.378: 97.0647% ( 17) 00:08:22.727 18753.378 - 18854.203: 97.2001% ( 15) 00:08:22.727 18854.203 - 18955.028: 97.3176% ( 13) 00:08:22.727 18955.028 - 19055.852: 97.3808% ( 7) 00:08:22.727 19055.852 - 19156.677: 97.4259% ( 5) 00:08:22.727 19156.677 - 19257.502: 97.4530% ( 3) 00:08:22.727 19257.502 - 19358.326: 97.4801% ( 3) 00:08:22.727 19358.326 - 19459.151: 97.5163% ( 4) 00:08:22.727 19459.151 - 19559.975: 97.5885% ( 8) 00:08:22.727 19559.975 - 19660.800: 97.6427% ( 6) 00:08:22.727 19660.800 - 19761.625: 97.7511% ( 12) 00:08:22.727 19761.625 - 19862.449: 97.8504% ( 11) 00:08:22.727 19862.449 - 19963.274: 97.9317% ( 9) 00:08:22.727 19963.274 - 20064.098: 98.0311% ( 11) 00:08:22.727 20064.098 - 20164.923: 98.1214% ( 10) 00:08:22.727 20164.923 - 20265.748: 98.1936% ( 8) 00:08:22.727 20265.748 - 20366.572: 98.2478% ( 6) 00:08:22.727 20366.572 - 20467.397: 98.3111% ( 7) 00:08:22.727 20467.397 - 20568.222: 98.3743% ( 7) 00:08:22.727 20568.222 - 20669.046: 98.4465% ( 8) 00:08:22.727 20669.046 - 20769.871: 98.5098% ( 7) 00:08:22.727 20769.871 - 20870.695: 98.5820% ( 8) 00:08:22.727 20870.695 - 20971.520: 98.6452% ( 7) 00:08:22.727 20971.520 - 21072.345: 98.7085% ( 7) 00:08:22.727 21072.345 - 21173.169: 98.7626% ( 6) 00:08:22.727 21173.169 - 21273.994: 98.7988% ( 4) 00:08:22.727 21273.994 - 21374.818: 98.8349% ( 4) 00:08:22.727 21374.818 - 21475.643: 98.8439% ( 1) 00:08:22.727 23290.486 - 23391.311: 98.8530% ( 1) 00:08:22.727 23391.311 - 23492.135: 98.8891% ( 4) 00:08:22.727 23492.135 - 23592.960: 98.9252% ( 4) 00:08:22.727 23592.960 - 23693.785: 98.9613% ( 4) 00:08:22.727 23693.785 - 23794.609: 98.9975% ( 4) 00:08:22.727 23794.609 - 23895.434: 99.0246% ( 3) 00:08:22.727 
23895.434 - 23996.258: 99.0607% ( 4) 00:08:22.727 23996.258 - 24097.083: 99.0968% ( 4) 00:08:22.727 24097.083 - 24197.908: 99.1239% ( 3) 00:08:22.727 24197.908 - 24298.732: 99.1600% ( 4) 00:08:22.727 24298.732 - 24399.557: 99.1871% ( 3) 00:08:22.727 24399.557 - 24500.382: 99.2233% ( 4) 00:08:22.727 24500.382 - 24601.206: 99.2504% ( 3) 00:08:22.727 24601.206 - 24702.031: 99.2775% ( 3) 00:08:22.727 24702.031 - 24802.855: 99.3136% ( 4) 00:08:22.727 24802.855 - 24903.680: 99.3407% ( 3) 00:08:22.727 24903.680 - 25004.505: 99.3768% ( 4) 00:08:22.727 25004.505 - 25105.329: 99.4129% ( 4) 00:08:22.727 25105.329 - 25206.154: 99.4220% ( 1) 00:08:22.727 31658.929 - 31860.578: 99.4581% ( 4) 00:08:22.727 31860.578 - 32062.228: 99.5394% ( 9) 00:08:22.727 32062.228 - 32263.877: 99.6297% ( 10) 00:08:22.727 32263.877 - 32465.526: 99.7200% ( 10) 00:08:22.727 32465.526 - 32667.175: 99.8103% ( 10) 00:08:22.727 32667.175 - 32868.825: 99.9007% ( 10) 00:08:22.727 32868.825 - 33070.474: 100.0000% ( 11) 00:08:22.727 00:08:22.727 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:22.727 ============================================================================== 00:08:22.727 Range in us Cumulative IO count 00:08:22.727 3201.182 - 3213.785: 0.0090% ( 1) 00:08:22.727 3213.785 - 3226.388: 0.0723% ( 7) 00:08:22.727 3226.388 - 3251.594: 0.0903% ( 2) 00:08:22.727 3251.594 - 3276.800: 0.1084% ( 2) 00:08:22.727 3276.800 - 3302.006: 0.1264% ( 2) 00:08:22.727 3302.006 - 3327.212: 0.1535% ( 3) 00:08:22.727 3327.212 - 3352.418: 0.1716% ( 2) 00:08:22.727 3352.418 - 3377.625: 0.1987% ( 3) 00:08:22.727 3377.625 - 3402.831: 0.2258% ( 3) 00:08:22.727 3402.831 - 3428.037: 0.2529% ( 3) 00:08:22.727 3428.037 - 3453.243: 0.2800% ( 3) 00:08:22.727 3453.243 - 3478.449: 0.3071% ( 3) 00:08:22.727 3478.449 - 3503.655: 0.3251% ( 2) 00:08:22.727 3503.655 - 3528.862: 0.3522% ( 3) 00:08:22.727 3528.862 - 3554.068: 0.3793% ( 3) 00:08:22.727 3554.068 - 3579.274: 0.4064% ( 3) 00:08:22.727 3579.274 - 3604.480: 0.4335% ( 3) 00:08:22.727 3604.480 - 3629.686: 0.4606% ( 3) 00:08:22.727 3629.686 - 3654.892: 0.4877% ( 3) 00:08:22.727 3654.892 - 3680.098: 0.5058% ( 2) 00:08:22.727 3680.098 - 3705.305: 0.5329% ( 3) 00:08:22.727 3705.305 - 3730.511: 0.5600% ( 3) 00:08:22.727 3730.511 - 3755.717: 0.5780% ( 2) 00:08:22.727 4915.200 - 4940.406: 0.5961% ( 2) 00:08:22.727 4940.406 - 4965.612: 0.6232% ( 3) 00:08:22.727 4965.612 - 4990.818: 0.6413% ( 2) 00:08:22.727 4990.818 - 5016.025: 0.6684% ( 3) 00:08:22.727 5016.025 - 5041.231: 0.6954% ( 3) 00:08:22.727 5041.231 - 5066.437: 0.7135% ( 2) 00:08:22.727 5066.437 - 5091.643: 0.7316% ( 2) 00:08:22.727 5091.643 - 5116.849: 0.7587% ( 3) 00:08:22.727 5116.849 - 5142.055: 0.7767% ( 2) 00:08:22.727 5142.055 - 5167.262: 0.8038% ( 3) 00:08:22.727 5167.262 - 5192.468: 0.8219% ( 2) 00:08:22.727 5192.468 - 5217.674: 0.8490% ( 3) 00:08:22.727 5217.674 - 5242.880: 0.8671% ( 2) 00:08:22.727 5242.880 - 5268.086: 0.8941% ( 3) 00:08:22.727 5268.086 - 5293.292: 0.9212% ( 3) 00:08:22.727 5293.292 - 5318.498: 0.9393% ( 2) 00:08:22.727 5318.498 - 5343.705: 0.9574% ( 2) 00:08:22.727 5343.705 - 5368.911: 0.9845% ( 3) 00:08:22.727 5368.911 - 5394.117: 1.0025% ( 2) 00:08:22.727 5394.117 - 5419.323: 1.0296% ( 3) 00:08:22.727 5419.323 - 5444.529: 1.0477% ( 2) 00:08:22.727 5444.529 - 5469.735: 1.0748% ( 3) 00:08:22.727 5469.735 - 5494.942: 1.0928% ( 2) 00:08:22.727 5494.942 - 5520.148: 1.1199% ( 3) 00:08:22.727 5520.148 - 5545.354: 1.1470% ( 3) 00:08:22.727 5545.354 - 5570.560: 1.1561% ( 1) 00:08:22.727 5646.178 - 5671.385: 
1.2012% ( 5) 00:08:22.727 5671.385 - 5696.591: 1.3006% ( 11) 00:08:22.727 5696.591 - 5721.797: 1.4180% ( 13) 00:08:22.727 5721.797 - 5747.003: 1.5625% ( 16) 00:08:22.727 5747.003 - 5772.209: 1.7612% ( 22) 00:08:22.727 5772.209 - 5797.415: 2.0231% ( 29) 00:08:22.727 5797.415 - 5822.622: 2.3934% ( 41) 00:08:22.727 5822.622 - 5847.828: 2.7818% ( 43) 00:08:22.727 5847.828 - 5873.034: 3.1882% ( 45) 00:08:22.727 5873.034 - 5898.240: 3.5314% ( 38) 00:08:22.727 5898.240 - 5923.446: 3.9017% ( 41) 00:08:22.727 5923.446 - 5948.652: 4.2630% ( 40) 00:08:22.727 5948.652 - 5973.858: 4.6965% ( 48) 00:08:22.727 5973.858 - 5999.065: 5.1030% ( 45) 00:08:22.727 5999.065 - 6024.271: 5.4913% ( 43) 00:08:22.727 6024.271 - 6049.477: 6.0603% ( 63) 00:08:22.727 6049.477 - 6074.683: 6.5300% ( 52) 00:08:22.727 6074.683 - 6099.889: 7.0177% ( 54) 00:08:22.727 6099.889 - 6125.095: 7.5145% ( 55) 00:08:22.727 6125.095 - 6150.302: 7.9660% ( 50) 00:08:22.727 6150.302 - 6175.508: 8.4447% ( 53) 00:08:22.727 6175.508 - 6200.714: 8.8421% ( 44) 00:08:22.727 6200.714 - 6225.920: 9.3208% ( 53) 00:08:22.727 6225.920 - 6251.126: 9.9350% ( 68) 00:08:22.727 6251.126 - 6276.332: 10.6846% ( 83) 00:08:22.727 6276.332 - 6301.538: 11.3710% ( 76) 00:08:22.728 6301.538 - 6326.745: 12.1749% ( 89) 00:08:22.728 6326.745 - 6351.951: 12.9335% ( 84) 00:08:22.728 6351.951 - 6377.157: 13.7645% ( 92) 00:08:22.728 6377.157 - 6402.363: 14.5141% ( 83) 00:08:22.728 6402.363 - 6427.569: 15.2366% ( 80) 00:08:22.728 6427.569 - 6452.775: 15.9321% ( 77) 00:08:22.728 6452.775 - 6503.188: 17.2778% ( 149) 00:08:22.728 6503.188 - 6553.600: 18.6236% ( 149) 00:08:22.728 6553.600 - 6604.012: 19.8519% ( 136) 00:08:22.728 6604.012 - 6654.425: 20.8725% ( 113) 00:08:22.728 6654.425 - 6704.837: 21.6673% ( 88) 00:08:22.728 6704.837 - 6755.249: 22.4440% ( 86) 00:08:22.728 6755.249 - 6805.662: 23.0762% ( 70) 00:08:22.728 6805.662 - 6856.074: 23.7626% ( 76) 00:08:22.728 6856.074 - 6906.486: 24.4039% ( 71) 00:08:22.728 6906.486 - 6956.898: 25.0090% ( 67) 00:08:22.728 6956.898 - 7007.311: 25.6593% ( 72) 00:08:22.728 7007.311 - 7057.723: 26.2464% ( 65) 00:08:22.728 7057.723 - 7108.135: 26.9057% ( 73) 00:08:22.728 7108.135 - 7158.548: 27.4566% ( 61) 00:08:22.728 7158.548 - 7208.960: 27.7637% ( 34) 00:08:22.728 7208.960 - 7259.372: 27.9082% ( 16) 00:08:22.728 7259.372 - 7309.785: 28.0257% ( 13) 00:08:22.728 7309.785 - 7360.197: 28.1431% ( 13) 00:08:22.728 7360.197 - 7410.609: 28.2605% ( 13) 00:08:22.728 7410.609 - 7461.022: 28.4230% ( 18) 00:08:22.728 7461.022 - 7511.434: 28.6127% ( 21) 00:08:22.728 7511.434 - 7561.846: 28.7753% ( 18) 00:08:22.728 7561.846 - 7612.258: 28.9379% ( 18) 00:08:22.728 7612.258 - 7662.671: 29.0824% ( 16) 00:08:22.728 7662.671 - 7713.083: 29.2449% ( 18) 00:08:22.728 7713.083 - 7763.495: 29.4075% ( 18) 00:08:22.728 7763.495 - 7813.908: 29.6152% ( 23) 00:08:22.728 7813.908 - 7864.320: 29.7688% ( 17) 00:08:22.728 7864.320 - 7914.732: 29.9314% ( 18) 00:08:22.728 7914.732 - 7965.145: 30.1030% ( 19) 00:08:22.728 7965.145 - 8015.557: 30.3920% ( 32) 00:08:22.728 8015.557 - 8065.969: 30.6268% ( 26) 00:08:22.728 8065.969 - 8116.382: 30.8074% ( 20) 00:08:22.728 8116.382 - 8166.794: 31.0152% ( 23) 00:08:22.728 8166.794 - 8217.206: 31.2319% ( 24) 00:08:22.728 8217.206 - 8267.618: 31.4035% ( 19) 00:08:22.728 8267.618 - 8318.031: 31.5661% ( 18) 00:08:22.728 8318.031 - 8368.443: 31.7016% ( 15) 00:08:22.728 8368.443 - 8418.855: 31.8280% ( 14) 00:08:22.728 8418.855 - 8469.268: 31.9725% ( 16) 00:08:22.728 8469.268 - 8519.680: 32.0900% ( 13) 00:08:22.728 8519.680 - 
8570.092: 32.1983% ( 12) 00:08:22.728 8570.092 - 8620.505: 32.3067% ( 12) 00:08:22.728 8620.505 - 8670.917: 32.4151% ( 12) 00:08:22.728 8670.917 - 8721.329: 32.5325% ( 13) 00:08:22.728 8721.329 - 8771.742: 32.6138% ( 9) 00:08:22.728 8771.742 - 8822.154: 32.6770% ( 7) 00:08:22.728 8822.154 - 8872.566: 32.7222% ( 5) 00:08:22.728 8872.566 - 8922.978: 32.7673% ( 5) 00:08:22.728 8922.978 - 8973.391: 32.8396% ( 8) 00:08:22.728 8973.391 - 9023.803: 32.9299% ( 10) 00:08:22.728 9023.803 - 9074.215: 33.0744% ( 16) 00:08:22.728 9074.215 - 9124.628: 33.2009% ( 14) 00:08:22.728 9124.628 - 9175.040: 33.2822% ( 9) 00:08:22.728 9175.040 - 9225.452: 33.3996% ( 13) 00:08:22.728 9225.452 - 9275.865: 33.5350% ( 15) 00:08:22.728 9275.865 - 9326.277: 33.6525% ( 13) 00:08:22.728 9326.277 - 9376.689: 33.7789% ( 14) 00:08:22.728 9376.689 - 9427.102: 33.9234% ( 16) 00:08:22.728 9427.102 - 9477.514: 34.1311% ( 23) 00:08:22.728 9477.514 - 9527.926: 34.2937% ( 18) 00:08:22.728 9527.926 - 9578.338: 34.4202% ( 14) 00:08:22.728 9578.338 - 9628.751: 34.5556% ( 15) 00:08:22.728 9628.751 - 9679.163: 34.7092% ( 17) 00:08:22.728 9679.163 - 9729.575: 34.8537% ( 16) 00:08:22.728 9729.575 - 9779.988: 34.9892% ( 15) 00:08:22.728 9779.988 - 9830.400: 35.1337% ( 16) 00:08:22.728 9830.400 - 9880.812: 35.2421% ( 12) 00:08:22.728 9880.812 - 9931.225: 35.3595% ( 13) 00:08:22.728 9931.225 - 9981.637: 35.4498% ( 10) 00:08:22.728 9981.637 - 10032.049: 35.5762% ( 14) 00:08:22.728 10032.049 - 10082.462: 35.7207% ( 16) 00:08:22.728 10082.462 - 10132.874: 35.9736% ( 28) 00:08:22.728 10132.874 - 10183.286: 36.1994% ( 25) 00:08:22.728 10183.286 - 10233.698: 36.4433% ( 27) 00:08:22.728 10233.698 - 10284.111: 36.7052% ( 29) 00:08:22.728 10284.111 - 10334.523: 36.9852% ( 31) 00:08:22.728 10334.523 - 10384.935: 37.3555% ( 41) 00:08:22.728 10384.935 - 10435.348: 37.8161% ( 51) 00:08:22.728 10435.348 - 10485.760: 38.3129% ( 55) 00:08:22.728 10485.760 - 10536.172: 38.7735% ( 51) 00:08:22.728 10536.172 - 10586.585: 39.2431% ( 52) 00:08:22.728 10586.585 - 10636.997: 39.7670% ( 58) 00:08:22.728 10636.997 - 10687.409: 40.2818% ( 57) 00:08:22.728 10687.409 - 10737.822: 40.8327% ( 61) 00:08:22.728 10737.822 - 10788.234: 41.4288% ( 66) 00:08:22.728 10788.234 - 10838.646: 42.0340% ( 67) 00:08:22.728 10838.646 - 10889.058: 42.6752% ( 71) 00:08:22.728 10889.058 - 10939.471: 43.3255% ( 72) 00:08:22.728 10939.471 - 10989.883: 43.9939% ( 74) 00:08:22.728 10989.883 - 11040.295: 44.6893% ( 77) 00:08:22.728 11040.295 - 11090.708: 45.3396% ( 72) 00:08:22.728 11090.708 - 11141.120: 45.9899% ( 72) 00:08:22.728 11141.120 - 11191.532: 46.5047% ( 57) 00:08:22.728 11191.532 - 11241.945: 47.0647% ( 62) 00:08:22.728 11241.945 - 11292.357: 47.6517% ( 65) 00:08:22.728 11292.357 - 11342.769: 48.2298% ( 64) 00:08:22.728 11342.769 - 11393.182: 48.7265% ( 55) 00:08:22.728 11393.182 - 11443.594: 49.1149% ( 43) 00:08:22.728 11443.594 - 11494.006: 49.4762% ( 40) 00:08:22.728 11494.006 - 11544.418: 49.8555% ( 42) 00:08:22.728 11544.418 - 11594.831: 50.2800% ( 47) 00:08:22.728 11594.831 - 11645.243: 50.6864% ( 45) 00:08:22.728 11645.243 - 11695.655: 51.0928% ( 45) 00:08:22.728 11695.655 - 11746.068: 51.4541% ( 40) 00:08:22.728 11746.068 - 11796.480: 51.8335% ( 42) 00:08:22.728 11796.480 - 11846.892: 52.1676% ( 37) 00:08:22.728 11846.892 - 11897.305: 52.5018% ( 37) 00:08:22.728 11897.305 - 11947.717: 52.8811% ( 42) 00:08:22.728 11947.717 - 11998.129: 53.2063% ( 36) 00:08:22.728 11998.129 - 12048.542: 53.5585% ( 39) 00:08:22.728 12048.542 - 12098.954: 53.9017% ( 38) 00:08:22.728 12098.954 
- 12149.366: 54.2088% ( 34) 00:08:22.728 12149.366 - 12199.778: 54.4978% ( 32) 00:08:22.728 12199.778 - 12250.191: 54.7868% ( 32) 00:08:22.728 12250.191 - 12300.603: 55.0307% ( 27) 00:08:22.728 12300.603 - 12351.015: 55.3197% ( 32) 00:08:22.728 12351.015 - 12401.428: 55.5726% ( 28) 00:08:22.728 12401.428 - 12451.840: 55.8436% ( 30) 00:08:22.728 12451.840 - 12502.252: 56.0874% ( 27) 00:08:22.728 12502.252 - 12552.665: 56.3042% ( 24) 00:08:22.728 12552.665 - 12603.077: 56.4577% ( 17) 00:08:22.728 12603.077 - 12653.489: 56.5751% ( 13) 00:08:22.728 12653.489 - 12703.902: 56.7016% ( 14) 00:08:22.728 12703.902 - 12754.314: 56.8551% ( 17) 00:08:22.728 12754.314 - 12804.726: 56.9906% ( 15) 00:08:22.728 12804.726 - 12855.138: 57.1622% ( 19) 00:08:22.728 12855.138 - 12905.551: 57.2796% ( 13) 00:08:22.728 12905.551 - 13006.375: 57.5054% ( 25) 00:08:22.728 13006.375 - 13107.200: 57.8215% ( 35) 00:08:22.728 13107.200 - 13208.025: 58.3183% ( 55) 00:08:22.728 13208.025 - 13308.849: 58.9234% ( 67) 00:08:22.728 13308.849 - 13409.674: 59.5556% ( 70) 00:08:22.728 13409.674 - 13510.498: 60.4588% ( 100) 00:08:22.728 13510.498 - 13611.323: 61.5155% ( 117) 00:08:22.728 13611.323 - 13712.148: 62.6084% ( 121) 00:08:22.728 13712.148 - 13812.972: 63.8186% ( 134) 00:08:22.728 13812.972 - 13913.797: 65.1102% ( 143) 00:08:22.728 13913.797 - 14014.622: 66.6185% ( 167) 00:08:22.728 14014.622 - 14115.446: 67.9913% ( 152) 00:08:22.728 14115.446 - 14216.271: 69.5087% ( 168) 00:08:22.728 14216.271 - 14317.095: 70.8544% ( 149) 00:08:22.728 14317.095 - 14417.920: 72.2814% ( 158) 00:08:22.728 14417.920 - 14518.745: 73.6362% ( 150) 00:08:22.728 14518.745 - 14619.569: 74.8374% ( 133) 00:08:22.728 14619.569 - 14720.394: 76.1561% ( 146) 00:08:22.728 14720.394 - 14821.218: 77.5560% ( 155) 00:08:22.728 14821.218 - 14922.043: 78.8204% ( 140) 00:08:22.728 14922.043 - 15022.868: 79.9946% ( 130) 00:08:22.728 15022.868 - 15123.692: 81.0513% ( 117) 00:08:22.728 15123.692 - 15224.517: 82.0629% ( 112) 00:08:22.728 15224.517 - 15325.342: 82.9480% ( 98) 00:08:22.728 15325.342 - 15426.166: 83.9415% ( 110) 00:08:22.728 15426.166 - 15526.991: 84.8537% ( 101) 00:08:22.728 15526.991 - 15627.815: 85.8020% ( 105) 00:08:22.728 15627.815 - 15728.640: 86.6600% ( 95) 00:08:22.728 15728.640 - 15829.465: 87.2742% ( 68) 00:08:22.728 15829.465 - 15930.289: 87.7800% ( 56) 00:08:22.728 15930.289 - 16031.114: 88.2225% ( 49) 00:08:22.728 16031.114 - 16131.938: 88.6019% ( 42) 00:08:22.728 16131.938 - 16232.763: 89.0083% ( 45) 00:08:22.728 16232.763 - 16333.588: 89.4870% ( 53) 00:08:22.728 16333.588 - 16434.412: 89.9928% ( 56) 00:08:22.728 16434.412 - 16535.237: 90.4715% ( 53) 00:08:22.728 16535.237 - 16636.062: 91.0043% ( 59) 00:08:22.728 16636.062 - 16736.886: 91.4108% ( 45) 00:08:22.728 16736.886 - 16837.711: 91.8533% ( 49) 00:08:22.728 16837.711 - 16938.535: 92.3230% ( 52) 00:08:22.728 16938.535 - 17039.360: 92.8107% ( 54) 00:08:22.728 17039.360 - 17140.185: 93.1629% ( 39) 00:08:22.728 17140.185 - 17241.009: 93.4881% ( 36) 00:08:22.728 17241.009 - 17341.834: 93.7229% ( 26) 00:08:22.728 17341.834 - 17442.658: 93.9487% ( 25) 00:08:22.728 17442.658 - 17543.483: 94.2106% ( 29) 00:08:22.728 17543.483 - 17644.308: 94.4725% ( 29) 00:08:22.728 17644.308 - 17745.132: 94.7977% ( 36) 00:08:22.728 17745.132 - 17845.957: 94.9874% ( 21) 00:08:22.728 17845.957 - 17946.782: 95.1499% ( 18) 00:08:22.728 17946.782 - 18047.606: 95.3577% ( 23) 00:08:22.728 18047.606 - 18148.431: 95.5112% ( 17) 00:08:22.728 18148.431 - 18249.255: 95.6557% ( 16) 00:08:22.728 18249.255 - 18350.080: 
95.8273% ( 19) 00:08:22.728 18350.080 - 18450.905: 95.9989% ( 19) 00:08:22.728 18450.905 - 18551.729: 96.1796% ( 20) 00:08:22.728 18551.729 - 18652.554: 96.3873% ( 23) 00:08:22.728 18652.554 - 18753.378: 96.5950% ( 23) 00:08:22.728 18753.378 - 18854.203: 96.7395% ( 16) 00:08:22.729 18854.203 - 18955.028: 96.9021% ( 18) 00:08:22.729 18955.028 - 19055.852: 97.0195% ( 13) 00:08:22.729 19055.852 - 19156.677: 97.1369% ( 13) 00:08:22.729 19156.677 - 19257.502: 97.2905% ( 17) 00:08:22.729 19257.502 - 19358.326: 97.4259% ( 15) 00:08:22.729 19358.326 - 19459.151: 97.5795% ( 17) 00:08:22.729 19459.151 - 19559.975: 97.7240% ( 16) 00:08:22.729 19559.975 - 19660.800: 97.8595% ( 15) 00:08:22.729 19660.800 - 19761.625: 97.9859% ( 14) 00:08:22.729 19761.625 - 19862.449: 98.1395% ( 17) 00:08:22.729 19862.449 - 19963.274: 98.2478% ( 12) 00:08:22.729 19963.274 - 20064.098: 98.3562% ( 12) 00:08:22.729 20064.098 - 20164.923: 98.4194% ( 7) 00:08:22.729 20164.923 - 20265.748: 98.4736% ( 6) 00:08:22.729 20265.748 - 20366.572: 98.5368% ( 7) 00:08:22.729 20366.572 - 20467.397: 98.5910% ( 6) 00:08:22.729 20467.397 - 20568.222: 98.6543% ( 7) 00:08:22.729 20568.222 - 20669.046: 98.7085% ( 6) 00:08:22.729 20669.046 - 20769.871: 98.7626% ( 6) 00:08:22.729 20769.871 - 20870.695: 98.7988% ( 4) 00:08:22.729 20870.695 - 20971.520: 98.8259% ( 3) 00:08:22.729 20971.520 - 21072.345: 98.8439% ( 2) 00:08:22.729 23492.135 - 23592.960: 98.8620% ( 2) 00:08:22.729 23592.960 - 23693.785: 98.8891% ( 3) 00:08:22.729 23693.785 - 23794.609: 98.9252% ( 4) 00:08:22.729 23794.609 - 23895.434: 98.9613% ( 4) 00:08:22.729 23895.434 - 23996.258: 98.9975% ( 4) 00:08:22.729 23996.258 - 24097.083: 99.0246% ( 3) 00:08:22.729 24097.083 - 24197.908: 99.0607% ( 4) 00:08:22.729 24197.908 - 24298.732: 99.0968% ( 4) 00:08:22.729 24298.732 - 24399.557: 99.1329% ( 4) 00:08:22.729 24399.557 - 24500.382: 99.1691% ( 4) 00:08:22.729 24500.382 - 24601.206: 99.1962% ( 3) 00:08:22.729 24601.206 - 24702.031: 99.2323% ( 4) 00:08:22.729 24702.031 - 24802.855: 99.2594% ( 3) 00:08:22.729 24802.855 - 24903.680: 99.2865% ( 3) 00:08:22.729 24903.680 - 25004.505: 99.3136% ( 3) 00:08:22.729 25004.505 - 25105.329: 99.3497% ( 4) 00:08:22.729 25105.329 - 25206.154: 99.3858% ( 4) 00:08:22.729 25206.154 - 25306.978: 99.4129% ( 3) 00:08:22.729 25306.978 - 25407.803: 99.4220% ( 1) 00:08:22.729 31255.631 - 31457.280: 99.4310% ( 1) 00:08:22.729 31457.280 - 31658.929: 99.5123% ( 9) 00:08:22.729 31658.929 - 31860.578: 99.6026% ( 10) 00:08:22.729 31860.578 - 32062.228: 99.7020% ( 11) 00:08:22.729 32062.228 - 32263.877: 99.8013% ( 11) 00:08:22.729 32263.877 - 32465.526: 99.8916% ( 10) 00:08:22.729 32465.526 - 32667.175: 99.9910% ( 11) 00:08:22.729 32667.175 - 32868.825: 100.0000% ( 1) 00:08:22.729 00:08:22.729 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:22.729 ============================================================================== 00:08:22.729 Range in us Cumulative IO count 00:08:22.729 2961.723 - 2974.326: 0.0269% ( 3) 00:08:22.729 2974.326 - 2986.929: 0.0449% ( 2) 00:08:22.729 2999.532 - 3012.135: 0.0539% ( 1) 00:08:22.729 3012.135 - 3024.738: 0.0718% ( 2) 00:08:22.729 3024.738 - 3037.342: 0.0808% ( 1) 00:08:22.729 3037.342 - 3049.945: 0.0898% ( 1) 00:08:22.729 3049.945 - 3062.548: 0.0988% ( 1) 00:08:22.729 3062.548 - 3075.151: 0.1167% ( 2) 00:08:22.729 3075.151 - 3087.754: 0.1257% ( 1) 00:08:22.729 3087.754 - 3100.357: 0.1437% ( 2) 00:08:22.729 3100.357 - 3112.960: 0.1616% ( 2) 00:08:22.729 3112.960 - 3125.563: 0.1706% ( 1) 00:08:22.729 3125.563 - 
3138.166: 0.1886% ( 2) 00:08:22.729 3138.166 - 3150.769: 0.1976% ( 1) 00:08:22.729 3150.769 - 3163.372: 0.2065% ( 1) 00:08:22.729 3163.372 - 3175.975: 0.2245% ( 2) 00:08:22.729 3175.975 - 3188.578: 0.2425% ( 2) 00:08:22.729 3188.578 - 3201.182: 0.2604% ( 2) 00:08:22.729 3201.182 - 3213.785: 0.2694% ( 1) 00:08:22.729 3213.785 - 3226.388: 0.2874% ( 2) 00:08:22.729 3226.388 - 3251.594: 0.3053% ( 2) 00:08:22.729 3251.594 - 3276.800: 0.3323% ( 3) 00:08:22.729 3276.800 - 3302.006: 0.3592% ( 3) 00:08:22.729 3302.006 - 3327.212: 0.3861% ( 3) 00:08:22.729 3327.212 - 3352.418: 0.4131% ( 3) 00:08:22.729 3352.418 - 3377.625: 0.4400% ( 3) 00:08:22.729 3377.625 - 3402.831: 0.4670% ( 3) 00:08:22.729 3402.831 - 3428.037: 0.4849% ( 2) 00:08:22.729 3428.037 - 3453.243: 0.5119% ( 3) 00:08:22.729 3453.243 - 3478.449: 0.5388% ( 3) 00:08:22.729 3478.449 - 3503.655: 0.5657% ( 3) 00:08:22.729 3503.655 - 3528.862: 0.5747% ( 1) 00:08:22.729 4688.345 - 4713.551: 0.6017% ( 3) 00:08:22.729 4713.551 - 4738.757: 0.6735% ( 8) 00:08:22.729 4738.757 - 4763.963: 0.6915% ( 2) 00:08:22.729 4763.963 - 4789.169: 0.7094% ( 2) 00:08:22.729 4789.169 - 4814.375: 0.7184% ( 1) 00:08:22.729 4814.375 - 4839.582: 0.7453% ( 3) 00:08:22.729 4839.582 - 4864.788: 0.7633% ( 2) 00:08:22.729 4864.788 - 4889.994: 0.7812% ( 2) 00:08:22.729 4889.994 - 4915.200: 0.8082% ( 3) 00:08:22.729 4915.200 - 4940.406: 0.8261% ( 2) 00:08:22.729 4940.406 - 4965.612: 0.8351% ( 1) 00:08:22.729 4965.612 - 4990.818: 0.8531% ( 2) 00:08:22.729 4990.818 - 5016.025: 0.8710% ( 2) 00:08:22.729 5016.025 - 5041.231: 0.8890% ( 2) 00:08:22.729 5041.231 - 5066.437: 0.9159% ( 3) 00:08:22.729 5066.437 - 5091.643: 0.9429% ( 3) 00:08:22.729 5091.643 - 5116.849: 0.9608% ( 2) 00:08:22.729 5116.849 - 5142.055: 0.9878% ( 3) 00:08:22.729 5142.055 - 5167.262: 1.0057% ( 2) 00:08:22.729 5167.262 - 5192.468: 1.0327% ( 3) 00:08:22.729 5192.468 - 5217.674: 1.0596% ( 3) 00:08:22.729 5217.674 - 5242.880: 1.0776% ( 2) 00:08:22.729 5242.880 - 5268.086: 1.1045% ( 3) 00:08:22.729 5268.086 - 5293.292: 1.1225% ( 2) 00:08:22.729 5293.292 - 5318.498: 1.1404% ( 2) 00:08:22.729 5318.498 - 5343.705: 1.1494% ( 1) 00:08:22.729 5620.972 - 5646.178: 1.1584% ( 1) 00:08:22.729 5646.178 - 5671.385: 1.1853% ( 3) 00:08:22.729 5671.385 - 5696.591: 1.2572% ( 8) 00:08:22.729 5696.591 - 5721.797: 1.4009% ( 16) 00:08:22.729 5721.797 - 5747.003: 1.5266% ( 14) 00:08:22.729 5747.003 - 5772.209: 1.6703% ( 16) 00:08:22.729 5772.209 - 5797.415: 2.0205% ( 39) 00:08:22.729 5797.415 - 5822.622: 2.3797% ( 40) 00:08:22.729 5822.622 - 5847.828: 2.7389% ( 40) 00:08:22.729 5847.828 - 5873.034: 3.0172% ( 31) 00:08:22.729 5873.034 - 5898.240: 3.4213% ( 45) 00:08:22.729 5898.240 - 5923.446: 3.8434% ( 47) 00:08:22.729 5923.446 - 5948.652: 4.2116% ( 41) 00:08:22.729 5948.652 - 5973.858: 4.7504% ( 60) 00:08:22.729 5973.858 - 5999.065: 5.1904% ( 49) 00:08:22.729 5999.065 - 6024.271: 5.7381% ( 61) 00:08:22.729 6024.271 - 6049.477: 6.2141% ( 53) 00:08:22.729 6049.477 - 6074.683: 6.6002% ( 43) 00:08:22.729 6074.683 - 6099.889: 7.0582% ( 51) 00:08:22.729 6099.889 - 6125.095: 7.5341% ( 53) 00:08:22.729 6125.095 - 6150.302: 8.0190% ( 54) 00:08:22.729 6150.302 - 6175.508: 8.4591% ( 49) 00:08:22.729 6175.508 - 6200.714: 8.9978% ( 60) 00:08:22.729 6200.714 - 6225.920: 9.5277% ( 59) 00:08:22.729 6225.920 - 6251.126: 10.1473% ( 69) 00:08:22.729 6251.126 - 6276.332: 10.7489% ( 67) 00:08:22.729 6276.332 - 6301.538: 11.4494% ( 78) 00:08:22.729 6301.538 - 6326.745: 12.2396% ( 88) 00:08:22.729 6326.745 - 6351.951: 13.0388% ( 89) 00:08:22.729 6351.951 
- 6377.157: 13.8111% ( 86) 00:08:22.729 6377.157 - 6402.363: 14.5474% ( 82) 00:08:22.729 6402.363 - 6427.569: 15.2478% ( 78) 00:08:22.729 6427.569 - 6452.775: 15.8944% ( 72) 00:08:22.729 6452.775 - 6503.188: 17.2593% ( 152) 00:08:22.729 6503.188 - 6553.600: 18.6333% ( 153) 00:08:22.729 6553.600 - 6604.012: 19.7917% ( 129) 00:08:22.729 6604.012 - 6654.425: 20.7974% ( 112) 00:08:22.729 6654.425 - 6704.837: 21.6505% ( 95) 00:08:22.729 6704.837 - 6755.249: 22.3509% ( 78) 00:08:22.729 6755.249 - 6805.662: 23.0693% ( 80) 00:08:22.729 6805.662 - 6856.074: 23.7069% ( 71) 00:08:22.729 6856.074 - 6906.486: 24.3445% ( 71) 00:08:22.729 6906.486 - 6956.898: 25.0269% ( 76) 00:08:22.729 6956.898 - 7007.311: 25.6555% ( 70) 00:08:22.729 7007.311 - 7057.723: 26.3380% ( 76) 00:08:22.729 7057.723 - 7108.135: 26.9666% ( 70) 00:08:22.729 7108.135 - 7158.548: 27.5413% ( 64) 00:08:22.729 7158.548 - 7208.960: 27.8556% ( 35) 00:08:22.729 7208.960 - 7259.372: 28.0442% ( 21) 00:08:22.729 7259.372 - 7309.785: 28.1968% ( 17) 00:08:22.729 7309.785 - 7360.197: 28.3315% ( 15) 00:08:22.729 7360.197 - 7410.609: 28.4393% ( 12) 00:08:22.729 7410.609 - 7461.022: 28.6279% ( 21) 00:08:22.729 7461.022 - 7511.434: 28.7716% ( 16) 00:08:22.729 7511.434 - 7561.846: 28.8703% ( 11) 00:08:22.729 7561.846 - 7612.258: 28.9871% ( 13) 00:08:22.729 7612.258 - 7662.671: 29.1038% ( 13) 00:08:22.729 7662.671 - 7713.083: 29.3103% ( 23) 00:08:22.729 7713.083 - 7763.495: 29.4181% ( 12) 00:08:22.729 7763.495 - 7813.908: 29.5259% ( 12) 00:08:22.729 7813.908 - 7864.320: 29.6516% ( 14) 00:08:22.729 7864.320 - 7914.732: 29.7953% ( 16) 00:08:22.729 7914.732 - 7965.145: 29.9120% ( 13) 00:08:22.729 7965.145 - 8015.557: 30.0377% ( 14) 00:08:22.729 8015.557 - 8065.969: 30.1545% ( 13) 00:08:22.729 8065.969 - 8116.382: 30.2981% ( 16) 00:08:22.729 8116.382 - 8166.794: 30.3969% ( 11) 00:08:22.729 8166.794 - 8217.206: 30.5316% ( 15) 00:08:22.729 8217.206 - 8267.618: 30.6932% ( 18) 00:08:22.729 8267.618 - 8318.031: 30.8818% ( 21) 00:08:22.729 8318.031 - 8368.443: 31.0345% ( 17) 00:08:22.729 8368.443 - 8418.855: 31.1692% ( 15) 00:08:22.729 8418.855 - 8469.268: 31.3039% ( 15) 00:08:22.729 8469.268 - 8519.680: 31.4116% ( 12) 00:08:22.729 8519.680 - 8570.092: 31.5194% ( 12) 00:08:22.729 8570.092 - 8620.505: 31.6272% ( 12) 00:08:22.729 8620.505 - 8670.917: 31.7529% ( 14) 00:08:22.730 8670.917 - 8721.329: 31.9055% ( 17) 00:08:22.730 8721.329 - 8771.742: 32.0672% ( 18) 00:08:22.730 8771.742 - 8822.154: 32.1929% ( 14) 00:08:22.730 8822.154 - 8872.566: 32.3276% ( 15) 00:08:22.730 8872.566 - 8922.978: 32.4443% ( 13) 00:08:22.730 8922.978 - 8973.391: 32.5521% ( 12) 00:08:22.730 8973.391 - 9023.803: 32.6778% ( 14) 00:08:22.730 9023.803 - 9074.215: 32.7856% ( 12) 00:08:22.730 9074.215 - 9124.628: 32.8664% ( 9) 00:08:22.730 9124.628 - 9175.040: 32.9652% ( 11) 00:08:22.730 9175.040 - 9225.452: 33.0550% ( 10) 00:08:22.730 9225.452 - 9275.865: 33.1448% ( 10) 00:08:22.730 9275.865 - 9326.277: 33.2884% ( 16) 00:08:22.730 9326.277 - 9376.689: 33.4591% ( 19) 00:08:22.730 9376.689 - 9427.102: 33.6656% ( 23) 00:08:22.730 9427.102 - 9477.514: 33.8272% ( 18) 00:08:22.730 9477.514 - 9527.926: 33.9889% ( 18) 00:08:22.730 9527.926 - 9578.338: 34.1236% ( 15) 00:08:22.730 9578.338 - 9628.751: 34.2493% ( 14) 00:08:22.730 9628.751 - 9679.163: 34.3391% ( 10) 00:08:22.730 9679.163 - 9729.575: 34.4468% ( 12) 00:08:22.730 9729.575 - 9779.988: 34.5456% ( 11) 00:08:22.730 9779.988 - 9830.400: 34.5995% ( 6) 00:08:22.730 9830.400 - 9880.812: 34.6624% ( 7) 00:08:22.730 9880.812 - 9931.225: 34.7432% ( 9) 
00:08:22.730 9931.225 - 9981.637: 34.8240% ( 9) 00:08:22.730 9981.637 - 10032.049: 34.9048% ( 9) 00:08:22.730 10032.049 - 10082.462: 35.0844% ( 20) 00:08:22.730 10082.462 - 10132.874: 35.2640% ( 20) 00:08:22.730 10132.874 - 10183.286: 35.4705% ( 23) 00:08:22.730 10183.286 - 10233.698: 35.7040% ( 26) 00:08:22.730 10233.698 - 10284.111: 35.9824% ( 31) 00:08:22.730 10284.111 - 10334.523: 36.3326% ( 39) 00:08:22.730 10334.523 - 10384.935: 36.7996% ( 52) 00:08:22.730 10384.935 - 10435.348: 37.2665% ( 52) 00:08:22.730 10435.348 - 10485.760: 37.6976% ( 48) 00:08:22.730 10485.760 - 10536.172: 38.2364% ( 60) 00:08:22.730 10536.172 - 10586.585: 38.8919% ( 73) 00:08:22.730 10586.585 - 10636.997: 39.4666% ( 64) 00:08:22.730 10636.997 - 10687.409: 40.1580% ( 77) 00:08:22.730 10687.409 - 10737.822: 40.9034% ( 83) 00:08:22.730 10737.822 - 10788.234: 41.6038% ( 78) 00:08:22.730 10788.234 - 10838.646: 42.3222% ( 80) 00:08:22.730 10838.646 - 10889.058: 43.0136% ( 77) 00:08:22.730 10889.058 - 10939.471: 43.6692% ( 73) 00:08:22.730 10939.471 - 10989.883: 44.3427% ( 75) 00:08:22.730 10989.883 - 11040.295: 44.9982% ( 73) 00:08:22.730 11040.295 - 11090.708: 45.6088% ( 68) 00:08:22.730 11090.708 - 11141.120: 46.2554% ( 72) 00:08:22.730 11141.120 - 11191.532: 46.9199% ( 74) 00:08:22.730 11191.532 - 11241.945: 47.5665% ( 72) 00:08:22.730 11241.945 - 11292.357: 48.0783% ( 57) 00:08:22.730 11292.357 - 11342.769: 48.4734% ( 44) 00:08:22.730 11342.769 - 11393.182: 48.9224% ( 50) 00:08:22.730 11393.182 - 11443.594: 49.2726% ( 39) 00:08:22.730 11443.594 - 11494.006: 49.5959% ( 36) 00:08:22.730 11494.006 - 11544.418: 49.9102% ( 35) 00:08:22.730 11544.418 - 11594.831: 50.1976% ( 32) 00:08:22.730 11594.831 - 11645.243: 50.4670% ( 30) 00:08:22.730 11645.243 - 11695.655: 50.7453% ( 31) 00:08:22.730 11695.655 - 11746.068: 50.9519% ( 23) 00:08:22.730 11746.068 - 11796.480: 51.1315% ( 20) 00:08:22.730 11796.480 - 11846.892: 51.3290% ( 22) 00:08:22.730 11846.892 - 11897.305: 51.5266% ( 22) 00:08:22.730 11897.305 - 11947.717: 51.7331% ( 23) 00:08:22.730 11947.717 - 11998.129: 51.9127% ( 20) 00:08:22.730 11998.129 - 12048.542: 52.1372% ( 25) 00:08:22.730 12048.542 - 12098.954: 52.3976% ( 29) 00:08:22.730 12098.954 - 12149.366: 52.6491% ( 28) 00:08:22.730 12149.366 - 12199.778: 52.8646% ( 24) 00:08:22.730 12199.778 - 12250.191: 53.0711% ( 23) 00:08:22.730 12250.191 - 12300.603: 53.4213% ( 39) 00:08:22.730 12300.603 - 12351.015: 53.7267% ( 34) 00:08:22.730 12351.015 - 12401.428: 54.0769% ( 39) 00:08:22.730 12401.428 - 12451.840: 54.4271% ( 39) 00:08:22.730 12451.840 - 12502.252: 54.7593% ( 37) 00:08:22.730 12502.252 - 12552.665: 55.0198% ( 29) 00:08:22.730 12552.665 - 12603.077: 55.2981% ( 31) 00:08:22.730 12603.077 - 12653.489: 55.5765% ( 31) 00:08:22.730 12653.489 - 12703.902: 55.8459% ( 30) 00:08:22.730 12703.902 - 12754.314: 56.1243% ( 31) 00:08:22.730 12754.314 - 12804.726: 56.4027% ( 31) 00:08:22.730 12804.726 - 12855.138: 56.6721% ( 30) 00:08:22.730 12855.138 - 12905.551: 56.8876% ( 24) 00:08:22.730 12905.551 - 13006.375: 57.3006% ( 46) 00:08:22.730 13006.375 - 13107.200: 57.6598% ( 40) 00:08:22.730 13107.200 - 13208.025: 58.1358% ( 53) 00:08:22.730 13208.025 - 13308.849: 58.6835% ( 61) 00:08:22.730 13308.849 - 13409.674: 59.3211% ( 71) 00:08:22.730 13409.674 - 13510.498: 60.2550% ( 104) 00:08:22.730 13510.498 - 13611.323: 61.4314% ( 131) 00:08:22.730 13611.323 - 13712.148: 62.6616% ( 137) 00:08:22.730 13712.148 - 13812.972: 64.1074% ( 161) 00:08:22.730 13812.972 - 13913.797: 65.5172% ( 157) 00:08:22.730 13913.797 - 14014.622: 
67.0438% ( 170) 00:08:22.730 14014.622 - 14115.446: 68.7500% ( 190) 00:08:22.730 14115.446 - 14216.271: 70.4741% ( 192) 00:08:22.730 14216.271 - 14317.095: 72.0456% ( 175) 00:08:22.730 14317.095 - 14417.920: 73.5902% ( 172) 00:08:22.730 14417.920 - 14518.745: 74.9551% ( 152) 00:08:22.730 14518.745 - 14619.569: 76.2392% ( 143) 00:08:22.730 14619.569 - 14720.394: 77.3976% ( 129) 00:08:22.730 14720.394 - 14821.218: 78.3854% ( 110) 00:08:22.730 14821.218 - 14922.043: 79.4091% ( 114) 00:08:22.730 14922.043 - 15022.868: 80.3520% ( 105) 00:08:22.730 15022.868 - 15123.692: 81.1692% ( 91) 00:08:22.730 15123.692 - 15224.517: 82.0851% ( 102) 00:08:22.730 15224.517 - 15325.342: 82.9203% ( 93) 00:08:22.730 15325.342 - 15426.166: 83.9350% ( 113) 00:08:22.730 15426.166 - 15526.991: 84.8150% ( 98) 00:08:22.730 15526.991 - 15627.815: 85.6322% ( 91) 00:08:22.730 15627.815 - 15728.640: 86.5122% ( 98) 00:08:22.730 15728.640 - 15829.465: 87.3024% ( 88) 00:08:22.730 15829.465 - 15930.289: 87.9759% ( 75) 00:08:22.730 15930.289 - 16031.114: 88.4968% ( 58) 00:08:22.730 16031.114 - 16131.938: 88.9278% ( 48) 00:08:22.730 16131.938 - 16232.763: 89.3409% ( 46) 00:08:22.730 16232.763 - 16333.588: 89.7629% ( 47) 00:08:22.730 16333.588 - 16434.412: 90.1850% ( 47) 00:08:22.730 16434.412 - 16535.237: 90.5711% ( 43) 00:08:22.730 16535.237 - 16636.062: 91.0920% ( 58) 00:08:22.730 16636.062 - 16736.886: 91.5140% ( 47) 00:08:22.730 16736.886 - 16837.711: 91.8732% ( 40) 00:08:22.730 16837.711 - 16938.535: 92.5018% ( 70) 00:08:22.730 16938.535 - 17039.360: 93.0136% ( 57) 00:08:22.730 17039.360 - 17140.185: 93.4177% ( 45) 00:08:22.730 17140.185 - 17241.009: 93.7859% ( 41) 00:08:22.730 17241.009 - 17341.834: 94.2439% ( 51) 00:08:22.730 17341.834 - 17442.658: 94.7737% ( 59) 00:08:22.730 17442.658 - 17543.483: 95.1598% ( 43) 00:08:22.730 17543.483 - 17644.308: 95.4831% ( 36) 00:08:22.730 17644.308 - 17745.132: 95.7435% ( 29) 00:08:22.730 17745.132 - 17845.957: 95.9770% ( 26) 00:08:22.730 17845.957 - 17946.782: 96.1566% ( 20) 00:08:22.730 17946.782 - 18047.606: 96.3272% ( 19) 00:08:22.730 18047.606 - 18148.431: 96.4889% ( 18) 00:08:22.730 18148.431 - 18249.255: 96.6056% ( 13) 00:08:22.730 18249.255 - 18350.080: 96.6864% ( 9) 00:08:22.730 18350.080 - 18450.905: 96.8121% ( 14) 00:08:22.730 18450.905 - 18551.729: 96.8840% ( 8) 00:08:22.730 18551.729 - 18652.554: 96.9648% ( 9) 00:08:22.730 18652.554 - 18753.378: 97.0815% ( 13) 00:08:22.730 18753.378 - 18854.203: 97.1983% ( 13) 00:08:22.730 18854.203 - 18955.028: 97.3060% ( 12) 00:08:22.730 18955.028 - 19055.852: 97.5036% ( 22) 00:08:22.730 19055.852 - 19156.677: 97.6832% ( 20) 00:08:22.730 19156.677 - 19257.502: 97.8448% ( 18) 00:08:22.730 19257.502 - 19358.326: 98.0424% ( 22) 00:08:22.730 19358.326 - 19459.151: 98.2489% ( 23) 00:08:22.730 19459.151 - 19559.975: 98.4016% ( 17) 00:08:22.730 19559.975 - 19660.800: 98.5453% ( 16) 00:08:22.730 19660.800 - 19761.625: 98.6530% ( 12) 00:08:22.730 19761.625 - 19862.449: 98.7608% ( 12) 00:08:22.730 19862.449 - 19963.274: 98.8685% ( 12) 00:08:22.730 19963.274 - 20064.098: 98.9853% ( 13) 00:08:22.730 20064.098 - 20164.923: 99.0751% ( 10) 00:08:22.730 20164.923 - 20265.748: 99.1379% ( 7) 00:08:22.730 20265.748 - 20366.572: 99.2008% ( 7) 00:08:22.730 20366.572 - 20467.397: 99.2636% ( 7) 00:08:22.730 20467.397 - 20568.222: 99.3265% ( 7) 00:08:22.730 20568.222 - 20669.046: 99.3534% ( 3) 00:08:22.730 20669.046 - 20769.871: 99.3894% ( 4) 00:08:22.730 20769.871 - 20870.695: 99.4073% ( 2) 00:08:22.730 20870.695 - 20971.520: 99.4253% ( 2) 00:08:22.730 
24197.908 - 24298.732: 99.4343% ( 1) 00:08:22.730 24298.732 - 24399.557: 99.4612% ( 3) 00:08:22.730 24399.557 - 24500.382: 99.4971% ( 4) 00:08:22.730 24500.382 - 24601.206: 99.5330% ( 4) 00:08:22.730 24601.206 - 24702.031: 99.5690% ( 4) 00:08:22.730 24702.031 - 24802.855: 99.6049% ( 4) 00:08:22.730 24802.855 - 24903.680: 99.6318% ( 3) 00:08:22.730 24903.680 - 25004.505: 99.6677% ( 4) 00:08:22.730 25004.505 - 25105.329: 99.7037% ( 4) 00:08:22.730 25105.329 - 25206.154: 99.7396% ( 4) 00:08:22.730 25206.154 - 25306.978: 99.7665% ( 3) 00:08:22.730 25306.978 - 25407.803: 99.8024% ( 4) 00:08:22.731 25407.803 - 25508.628: 99.8294% ( 3) 00:08:22.731 25508.628 - 25609.452: 99.8653% ( 4) 00:08:22.731 25609.452 - 25710.277: 99.9012% ( 4) 00:08:22.731 25710.277 - 25811.102: 99.9371% ( 4) 00:08:22.731 25811.102 - 26012.751: 100.0000% ( 7) 00:08:22.731 00:08:22.731 10:14:01 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:24.118 Initializing NVMe Controllers 00:08:24.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:24.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:24.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:24.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:24.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:24.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:24.118 Initialization complete. Launching workers. 00:08:24.118 ======================================================== 00:08:24.118 Latency(us) 00:08:24.118 Device Information : IOPS MiB/s Average min max 00:08:24.118 PCIE (0000:00:13.0) NSID 1 from core 0: 11113.10 130.23 11529.12 5302.36 26468.12 00:08:24.118 PCIE (0000:00:10.0) NSID 1 from core 0: 11113.10 130.23 11526.32 5327.61 26627.01 00:08:24.118 PCIE (0000:00:11.0) NSID 1 from core 0: 11113.10 130.23 11519.00 5499.66 26408.59 00:08:24.118 PCIE (0000:00:12.0) NSID 1 from core 0: 11113.10 130.23 11511.86 5412.75 26296.31 00:08:24.118 PCIE (0000:00:12.0) NSID 2 from core 0: 11113.10 130.23 11504.95 5305.33 26428.24 00:08:24.118 PCIE (0000:00:12.0) NSID 3 from core 0: 11113.10 130.23 11498.06 4652.92 26319.58 00:08:24.118 ======================================================== 00:08:24.118 Total : 66678.58 781.39 11514.89 4652.92 26627.01 00:08:24.118 00:08:24.118 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:24.118 ================================================================================= 00:08:24.118 1.00000% : 5721.797us 00:08:24.118 10.00000% : 6755.249us 00:08:24.118 25.00000% : 9981.637us 00:08:24.118 50.00000% : 11544.418us 00:08:24.118 75.00000% : 13409.674us 00:08:24.118 90.00000% : 15224.517us 00:08:24.118 95.00000% : 16031.114us 00:08:24.118 98.00000% : 16535.237us 00:08:24.118 99.00000% : 16837.711us 00:08:24.118 99.50000% : 25508.628us 00:08:24.118 99.90000% : 26416.049us 00:08:24.118 99.99000% : 26617.698us 00:08:24.118 99.99900% : 26617.698us 00:08:24.118 99.99990% : 26617.698us 00:08:24.118 99.99999% : 26617.698us 00:08:24.118 00:08:24.118 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.118 ================================================================================= 
00:08:24.118 1.00000% : 5696.591us 00:08:24.118 10.00000% : 6704.837us 00:08:24.118 25.00000% : 10032.049us 00:08:24.118 50.00000% : 11393.182us 00:08:24.118 75.00000% : 13510.498us 00:08:24.118 90.00000% : 15123.692us 00:08:24.118 95.00000% : 16232.763us 00:08:24.118 98.00000% : 16938.535us 00:08:24.118 99.00000% : 20366.572us 00:08:24.118 99.50000% : 25508.628us 00:08:24.118 99.90000% : 26416.049us 00:08:24.118 99.99000% : 26819.348us 00:08:24.118 99.99900% : 26819.348us 00:08:24.118 99.99990% : 26819.348us 00:08:24.118 99.99999% : 26819.348us 00:08:24.118 00:08:24.118 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.118 ================================================================================= 00:08:24.118 1.00000% : 5721.797us 00:08:24.118 10.00000% : 6553.600us 00:08:24.118 25.00000% : 10032.049us 00:08:24.118 50.00000% : 11393.182us 00:08:24.118 75.00000% : 13409.674us 00:08:24.118 90.00000% : 15224.517us 00:08:24.118 95.00000% : 16333.588us 00:08:24.118 98.00000% : 17241.009us 00:08:24.118 99.00000% : 20164.923us 00:08:24.118 99.50000% : 25407.803us 00:08:24.118 99.90000% : 26214.400us 00:08:24.118 99.99000% : 26416.049us 00:08:24.118 99.99900% : 26416.049us 00:08:24.118 99.99990% : 26416.049us 00:08:24.118 99.99999% : 26416.049us 00:08:24.118 00:08:24.118 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:24.118 ================================================================================= 00:08:24.118 1.00000% : 5721.797us 00:08:24.118 10.00000% : 6654.425us 00:08:24.118 25.00000% : 9981.637us 00:08:24.118 50.00000% : 11443.594us 00:08:24.118 75.00000% : 13409.674us 00:08:24.118 90.00000% : 15224.517us 00:08:24.118 95.00000% : 16232.763us 00:08:24.118 98.00000% : 17039.360us 00:08:24.118 99.00000% : 20366.572us 00:08:24.118 99.50000% : 25609.452us 00:08:24.118 99.90000% : 26214.400us 00:08:24.118 99.99000% : 26416.049us 00:08:24.118 99.99900% : 26416.049us 00:08:24.118 99.99990% : 26416.049us 00:08:24.118 99.99999% : 26416.049us 00:08:24.118 00:08:24.118 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.118 ================================================================================= 00:08:24.118 1.00000% : 5671.385us 00:08:24.118 10.00000% : 6704.837us 00:08:24.118 25.00000% : 9981.637us 00:08:24.118 50.00000% : 11544.418us 00:08:24.118 75.00000% : 13308.849us 00:08:24.118 90.00000% : 15325.342us 00:08:24.118 95.00000% : 16232.763us 00:08:24.118 98.00000% : 16736.886us 00:08:24.118 99.00000% : 20164.923us 00:08:24.118 99.50000% : 25407.803us 00:08:24.118 99.90000% : 26416.049us 00:08:24.118 99.99000% : 26416.049us 00:08:24.118 99.99900% : 26617.698us 00:08:24.118 99.99990% : 26617.698us 00:08:24.118 99.99999% : 26617.698us 00:08:24.118 00:08:24.118 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:24.118 ================================================================================= 00:08:24.118 1.00000% : 5646.178us 00:08:24.118 10.00000% : 6604.012us 00:08:24.118 25.00000% : 9981.637us 00:08:24.118 50.00000% : 11544.418us 00:08:24.118 75.00000% : 13308.849us 00:08:24.118 90.00000% : 15325.342us 00:08:24.118 95.00000% : 16031.114us 00:08:24.118 98.00000% : 16736.886us 00:08:24.118 99.00000% : 20064.098us 00:08:24.118 99.50000% : 25407.803us 00:08:24.118 99.90000% : 26214.400us 00:08:24.118 99.99000% : 26416.049us 00:08:24.118 99.99900% : 26416.049us 00:08:24.118 99.99990% : 26416.049us 00:08:24.118 99.99999% : 26416.049us 00:08:24.118 00:08:24.118 Latency histogram for PCIE 
(0000:00:13.0) NSID 1 from core 0: 00:08:24.118 ============================================================================== 00:08:24.118 Range in us Cumulative IO count 00:08:24.118 5293.292 - 5318.498: 0.0090% ( 1) 00:08:24.118 5318.498 - 5343.705: 0.0180% ( 1) 00:08:24.118 5368.911 - 5394.117: 0.0269% ( 1) 00:08:24.118 5419.323 - 5444.529: 0.0359% ( 1) 00:08:24.118 5494.942 - 5520.148: 0.0449% ( 1) 00:08:24.118 5520.148 - 5545.354: 0.0988% ( 6) 00:08:24.118 5545.354 - 5570.560: 0.1796% ( 9) 00:08:24.118 5570.560 - 5595.766: 0.2514% ( 8) 00:08:24.118 5595.766 - 5620.972: 0.3143% ( 7) 00:08:24.118 5620.972 - 5646.178: 0.4221% ( 12) 00:08:24.118 5646.178 - 5671.385: 0.5298% ( 12) 00:08:24.118 5671.385 - 5696.591: 0.8172% ( 32) 00:08:24.118 5696.591 - 5721.797: 1.1943% ( 42) 00:08:24.118 5721.797 - 5747.003: 1.3290% ( 15) 00:08:24.118 5747.003 - 5772.209: 1.5984% ( 30) 00:08:24.118 5772.209 - 5797.415: 1.7241% ( 14) 00:08:24.118 5797.415 - 5822.622: 1.8499% ( 14) 00:08:24.118 5822.622 - 5847.828: 2.0205% ( 19) 00:08:24.118 5847.828 - 5873.034: 2.2270% ( 23) 00:08:24.118 5873.034 - 5898.240: 2.3527% ( 14) 00:08:24.118 5898.240 - 5923.446: 2.5772% ( 25) 00:08:24.118 5923.446 - 5948.652: 2.7299% ( 17) 00:08:24.118 5948.652 - 5973.858: 2.8556% ( 14) 00:08:24.118 5973.858 - 5999.065: 2.9903% ( 15) 00:08:24.118 5999.065 - 6024.271: 3.2597% ( 30) 00:08:24.118 6024.271 - 6049.477: 3.4213% ( 18) 00:08:24.118 6049.477 - 6074.683: 3.5740% ( 17) 00:08:24.118 6074.683 - 6099.889: 3.9871% ( 46) 00:08:24.118 6099.889 - 6125.095: 4.4361% ( 50) 00:08:24.118 6125.095 - 6150.302: 4.7953% ( 40) 00:08:24.118 6150.302 - 6175.508: 5.0647% ( 30) 00:08:24.118 6175.508 - 6200.714: 5.2712% ( 23) 00:08:24.118 6200.714 - 6225.920: 5.7651% ( 55) 00:08:24.118 6225.920 - 6251.126: 6.2320% ( 52) 00:08:24.118 6251.126 - 6276.332: 6.8517% ( 69) 00:08:24.118 6276.332 - 6301.538: 7.2557% ( 45) 00:08:24.118 6301.538 - 6326.745: 7.4353% ( 20) 00:08:24.118 6326.745 - 6351.951: 7.6598% ( 25) 00:08:24.118 6351.951 - 6377.157: 7.9652% ( 34) 00:08:24.118 6377.157 - 6402.363: 8.4501% ( 54) 00:08:24.118 6402.363 - 6427.569: 8.6027% ( 17) 00:08:24.118 6427.569 - 6452.775: 8.8631% ( 29) 00:08:24.118 6452.775 - 6503.188: 9.1505% ( 32) 00:08:24.118 6503.188 - 6553.600: 9.5187% ( 41) 00:08:24.118 6553.600 - 6604.012: 9.6624% ( 16) 00:08:24.118 6604.012 - 6654.425: 9.7611% ( 11) 00:08:24.118 6654.425 - 6704.837: 9.8869% ( 14) 00:08:24.118 6704.837 - 6755.249: 10.1922% ( 34) 00:08:24.118 6755.249 - 6805.662: 10.2460% ( 6) 00:08:24.118 6805.662 - 6856.074: 10.2730% ( 3) 00:08:24.118 6856.074 - 6906.486: 10.3179% ( 5) 00:08:24.119 6906.486 - 6956.898: 10.3448% ( 3) 00:08:24.119 6956.898 - 7007.311: 10.3628% ( 2) 00:08:24.119 7007.311 - 7057.723: 10.3987% ( 4) 00:08:24.119 7057.723 - 7108.135: 10.4526% ( 6) 00:08:24.119 7108.135 - 7158.548: 10.5603% ( 12) 00:08:24.119 7158.548 - 7208.960: 10.8926% ( 37) 00:08:24.119 7208.960 - 7259.372: 10.9824% ( 10) 00:08:24.119 7259.372 - 7309.785: 11.0812% ( 11) 00:08:24.119 7309.785 - 7360.197: 11.3596% ( 31) 00:08:24.119 7360.197 - 7410.609: 11.4134% ( 6) 00:08:24.119 7410.609 - 7461.022: 11.4494% ( 4) 00:08:24.119 7461.022 - 7511.434: 11.4853% ( 4) 00:08:24.119 7511.434 - 7561.846: 11.5571% ( 8) 00:08:24.119 7561.846 - 7612.258: 11.6290% ( 8) 00:08:24.119 7612.258 - 7662.671: 11.7188% ( 10) 00:08:24.119 7662.671 - 7713.083: 12.0600% ( 38) 00:08:24.119 7713.083 - 7763.495: 12.1139% ( 6) 00:08:24.119 7763.495 - 7813.908: 12.2126% ( 11) 00:08:24.119 7813.908 - 7864.320: 12.4461% ( 26) 00:08:24.119 7864.320 
- 7914.732: 12.5269% ( 9) 00:08:24.119 7914.732 - 7965.145: 12.5808% ( 6) 00:08:24.119 7965.145 - 8015.557: 12.6886% ( 12) 00:08:24.119 8015.557 - 8065.969: 12.8682% ( 20) 00:08:24.119 8065.969 - 8116.382: 13.1376% ( 30) 00:08:24.119 8116.382 - 8166.794: 13.4968% ( 40) 00:08:24.119 8166.794 - 8217.206: 13.7033% ( 23) 00:08:24.119 8217.206 - 8267.618: 14.0715% ( 41) 00:08:24.119 8267.618 - 8318.031: 14.2241% ( 17) 00:08:24.119 8318.031 - 8368.443: 14.3139% ( 10) 00:08:24.119 8368.443 - 8418.855: 14.3588% ( 5) 00:08:24.119 8418.855 - 8469.268: 14.4486% ( 10) 00:08:24.119 8469.268 - 8519.680: 14.6103% ( 18) 00:08:24.119 8519.680 - 8570.092: 14.7540% ( 16) 00:08:24.119 8570.092 - 8620.505: 14.8348% ( 9) 00:08:24.119 8620.505 - 8670.917: 14.8886% ( 6) 00:08:24.119 8670.917 - 8721.329: 15.0323% ( 16) 00:08:24.119 8721.329 - 8771.742: 15.2478% ( 24) 00:08:24.119 8771.742 - 8822.154: 15.4813% ( 26) 00:08:24.119 8822.154 - 8872.566: 15.9483% ( 52) 00:08:24.119 8872.566 - 8922.978: 16.2446% ( 33) 00:08:24.119 8922.978 - 8973.391: 16.5140% ( 30) 00:08:24.119 8973.391 - 9023.803: 16.9361% ( 47) 00:08:24.119 9023.803 - 9074.215: 17.3761% ( 49) 00:08:24.119 9074.215 - 9124.628: 17.6814% ( 34) 00:08:24.119 9124.628 - 9175.040: 17.8790% ( 22) 00:08:24.119 9175.040 - 9225.452: 18.0945% ( 24) 00:08:24.119 9225.452 - 9275.865: 18.3190% ( 25) 00:08:24.119 9275.865 - 9326.277: 18.5165% ( 22) 00:08:24.119 9326.277 - 9376.689: 18.7500% ( 26) 00:08:24.119 9376.689 - 9427.102: 19.0194% ( 30) 00:08:24.119 9427.102 - 9477.514: 19.2259% ( 23) 00:08:24.119 9477.514 - 9527.926: 19.4504% ( 25) 00:08:24.119 9527.926 - 9578.338: 19.9892% ( 60) 00:08:24.119 9578.338 - 9628.751: 20.3843% ( 44) 00:08:24.119 9628.751 - 9679.163: 20.9591% ( 64) 00:08:24.119 9679.163 - 9729.575: 21.5966% ( 71) 00:08:24.119 9729.575 - 9779.988: 22.0456% ( 50) 00:08:24.119 9779.988 - 9830.400: 22.6203% ( 64) 00:08:24.119 9830.400 - 9880.812: 23.5004% ( 98) 00:08:24.119 9880.812 - 9931.225: 24.3894% ( 99) 00:08:24.119 9931.225 - 9981.637: 25.3592% ( 108) 00:08:24.119 9981.637 - 10032.049: 26.2033% ( 94) 00:08:24.119 10032.049 - 10082.462: 26.9397% ( 82) 00:08:24.119 10082.462 - 10132.874: 27.8017% ( 96) 00:08:24.119 10132.874 - 10183.286: 28.8703% ( 119) 00:08:24.119 10183.286 - 10233.698: 29.7055% ( 93) 00:08:24.119 10233.698 - 10284.111: 30.3700% ( 74) 00:08:24.119 10284.111 - 10334.523: 31.1602% ( 88) 00:08:24.119 10334.523 - 10384.935: 31.9864% ( 92) 00:08:24.119 10384.935 - 10435.348: 32.8215% ( 93) 00:08:24.119 10435.348 - 10485.760: 33.6835% ( 96) 00:08:24.119 10485.760 - 10536.172: 34.5546% ( 97) 00:08:24.119 10536.172 - 10586.585: 35.4526% ( 100) 00:08:24.119 10586.585 - 10636.997: 36.4853% ( 115) 00:08:24.119 10636.997 - 10687.409: 37.6078% ( 125) 00:08:24.119 10687.409 - 10737.822: 38.8470% ( 138) 00:08:24.119 10737.822 - 10788.234: 40.1401% ( 144) 00:08:24.119 10788.234 - 10838.646: 41.3165% ( 131) 00:08:24.119 10838.646 - 10889.058: 42.2414% ( 103) 00:08:24.119 10889.058 - 10939.471: 42.9418% ( 78) 00:08:24.119 10939.471 - 10989.883: 43.4626% ( 58) 00:08:24.119 10989.883 - 11040.295: 44.0284% ( 63) 00:08:24.119 11040.295 - 11090.708: 44.5941% ( 63) 00:08:24.119 11090.708 - 11141.120: 45.0521% ( 51) 00:08:24.119 11141.120 - 11191.532: 45.5460% ( 55) 00:08:24.119 11191.532 - 11241.945: 46.1476% ( 67) 00:08:24.119 11241.945 - 11292.357: 46.7134% ( 63) 00:08:24.119 11292.357 - 11342.769: 47.3599% ( 72) 00:08:24.119 11342.769 - 11393.182: 48.0963% ( 82) 00:08:24.119 11393.182 - 11443.594: 48.7967% ( 78) 00:08:24.119 11443.594 - 11494.006: 
49.4163% ( 69) 00:08:24.119 11494.006 - 11544.418: 50.1706% ( 84) 00:08:24.119 11544.418 - 11594.831: 50.9159% ( 83) 00:08:24.119 11594.831 - 11645.243: 51.7780% ( 96) 00:08:24.119 11645.243 - 11695.655: 52.5323% ( 84) 00:08:24.119 11695.655 - 11746.068: 53.1789% ( 72) 00:08:24.119 11746.068 - 11796.480: 53.7087% ( 59) 00:08:24.119 11796.480 - 11846.892: 54.1487% ( 49) 00:08:24.119 11846.892 - 11897.305: 54.5259% ( 42) 00:08:24.119 11897.305 - 11947.717: 54.8940% ( 41) 00:08:24.119 11947.717 - 11998.129: 55.2892% ( 44) 00:08:24.119 11998.129 - 12048.542: 55.7381% ( 50) 00:08:24.119 12048.542 - 12098.954: 56.3039% ( 63) 00:08:24.119 12098.954 - 12149.366: 56.9145% ( 68) 00:08:24.119 12149.366 - 12199.778: 57.6060% ( 77) 00:08:24.119 12199.778 - 12250.191: 58.4052% ( 89) 00:08:24.119 12250.191 - 12300.603: 59.2942% ( 99) 00:08:24.119 12300.603 - 12351.015: 60.3089% ( 113) 00:08:24.119 12351.015 - 12401.428: 60.9195% ( 68) 00:08:24.119 12401.428 - 12451.840: 61.5751% ( 73) 00:08:24.119 12451.840 - 12502.252: 62.3114% ( 82) 00:08:24.119 12502.252 - 12552.665: 63.0119% ( 78) 00:08:24.119 12552.665 - 12603.077: 63.6764% ( 74) 00:08:24.119 12603.077 - 12653.489: 64.6552% ( 109) 00:08:24.119 12653.489 - 12703.902: 65.5352% ( 98) 00:08:24.119 12703.902 - 12754.314: 66.5140% ( 109) 00:08:24.119 12754.314 - 12804.726: 67.2144% ( 78) 00:08:24.119 12804.726 - 12855.138: 68.0226% ( 90) 00:08:24.119 12855.138 - 12905.551: 68.7141% ( 77) 00:08:24.119 12905.551 - 13006.375: 70.1778% ( 163) 00:08:24.119 13006.375 - 13107.200: 71.5068% ( 148) 00:08:24.119 13107.200 - 13208.025: 72.7101% ( 134) 00:08:24.119 13208.025 - 13308.849: 73.9134% ( 134) 00:08:24.119 13308.849 - 13409.674: 75.0449% ( 126) 00:08:24.119 13409.674 - 13510.498: 76.1225% ( 120) 00:08:24.119 13510.498 - 13611.323: 77.3617% ( 138) 00:08:24.119 13611.323 - 13712.148: 78.5201% ( 129) 00:08:24.119 13712.148 - 13812.972: 79.7863% ( 141) 00:08:24.119 13812.972 - 13913.797: 80.9088% ( 125) 00:08:24.119 13913.797 - 14014.622: 81.9325% ( 114) 00:08:24.119 14014.622 - 14115.446: 82.8215% ( 99) 00:08:24.119 14115.446 - 14216.271: 83.5758% ( 84) 00:08:24.119 14216.271 - 14317.095: 84.3840% ( 90) 00:08:24.119 14317.095 - 14417.920: 85.0665% ( 76) 00:08:24.119 14417.920 - 14518.745: 85.7579% ( 77) 00:08:24.119 14518.745 - 14619.569: 86.5571% ( 89) 00:08:24.119 14619.569 - 14720.394: 87.3114% ( 84) 00:08:24.119 14720.394 - 14821.218: 88.1645% ( 95) 00:08:24.119 14821.218 - 14922.043: 88.7302% ( 63) 00:08:24.119 14922.043 - 15022.868: 89.2241% ( 55) 00:08:24.119 15022.868 - 15123.692: 89.6372% ( 46) 00:08:24.119 15123.692 - 15224.517: 90.0413% ( 45) 00:08:24.119 15224.517 - 15325.342: 90.4634% ( 47) 00:08:24.119 15325.342 - 15426.166: 90.7866% ( 36) 00:08:24.119 15426.166 - 15526.991: 91.2536% ( 52) 00:08:24.119 15526.991 - 15627.815: 91.8463% ( 66) 00:08:24.119 15627.815 - 15728.640: 92.4928% ( 72) 00:08:24.119 15728.640 - 15829.465: 93.2292% ( 82) 00:08:24.119 15829.465 - 15930.289: 93.9745% ( 83) 00:08:24.119 15930.289 - 16031.114: 95.0251% ( 117) 00:08:24.119 16031.114 - 16131.938: 95.7076% ( 76) 00:08:24.119 16131.938 - 16232.763: 96.4440% ( 82) 00:08:24.119 16232.763 - 16333.588: 97.1534% ( 79) 00:08:24.119 16333.588 - 16434.412: 97.7101% ( 62) 00:08:24.119 16434.412 - 16535.237: 98.2848% ( 64) 00:08:24.119 16535.237 - 16636.062: 98.7249% ( 49) 00:08:24.119 16636.062 - 16736.886: 98.9673% ( 27) 00:08:24.119 16736.886 - 16837.711: 99.0661% ( 11) 00:08:24.119 16837.711 - 16938.535: 99.1559% ( 10) 00:08:24.119 16938.535 - 17039.360: 99.2277% ( 8) 
00:08:24.119 17039.360 - 17140.185: 99.2816% ( 6) 00:08:24.119 17140.185 - 17241.009: 99.3445% ( 7) 00:08:24.119 17241.009 - 17341.834: 99.3983% ( 6) 00:08:24.119 17341.834 - 17442.658: 99.4253% ( 3) 00:08:24.119 25206.154 - 25306.978: 99.4612% ( 4) 00:08:24.119 25306.978 - 25407.803: 99.4792% ( 2) 00:08:24.119 25407.803 - 25508.628: 99.5061% ( 3) 00:08:24.119 25508.628 - 25609.452: 99.5420% ( 4) 00:08:24.119 25609.452 - 25710.277: 99.6588% ( 13) 00:08:24.119 25710.277 - 25811.102: 99.7037% ( 5) 00:08:24.119 25811.102 - 26012.751: 99.8024% ( 11) 00:08:24.119 26012.751 - 26214.400: 99.8922% ( 10) 00:08:24.119 26214.400 - 26416.049: 99.9820% ( 10) 00:08:24.119 26416.049 - 26617.698: 100.0000% ( 2) 00:08:24.119 00:08:24.119 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.119 ============================================================================== 00:08:24.119 Range in us Cumulative IO count 00:08:24.119 5318.498 - 5343.705: 0.0090% ( 1) 00:08:24.119 5368.911 - 5394.117: 0.0180% ( 1) 00:08:24.119 5469.735 - 5494.942: 0.0269% ( 1) 00:08:24.119 5545.354 - 5570.560: 0.0718% ( 5) 00:08:24.119 5570.560 - 5595.766: 0.2155% ( 16) 00:08:24.120 5595.766 - 5620.972: 0.3682% ( 17) 00:08:24.120 5620.972 - 5646.178: 0.5837% ( 24) 00:08:24.120 5646.178 - 5671.385: 0.8172% ( 26) 00:08:24.120 5671.385 - 5696.591: 1.0147% ( 22) 00:08:24.120 5696.591 - 5721.797: 1.2302% ( 24) 00:08:24.120 5721.797 - 5747.003: 1.4547% ( 25) 00:08:24.120 5747.003 - 5772.209: 1.6433% ( 21) 00:08:24.120 5772.209 - 5797.415: 1.8678% ( 25) 00:08:24.120 5797.415 - 5822.622: 2.0923% ( 25) 00:08:24.120 5822.622 - 5847.828: 2.4246% ( 37) 00:08:24.120 5847.828 - 5873.034: 2.5772% ( 17) 00:08:24.120 5873.034 - 5898.240: 2.7478% ( 19) 00:08:24.120 5898.240 - 5923.446: 2.8736% ( 14) 00:08:24.120 5923.446 - 5948.652: 3.0981% ( 25) 00:08:24.120 5948.652 - 5973.858: 3.2687% ( 19) 00:08:24.120 5973.858 - 5999.065: 3.5022% ( 26) 00:08:24.120 5999.065 - 6024.271: 3.6907% ( 21) 00:08:24.120 6024.271 - 6049.477: 3.8793% ( 21) 00:08:24.120 6049.477 - 6074.683: 4.2116% ( 37) 00:08:24.120 6074.683 - 6099.889: 4.5259% ( 35) 00:08:24.120 6099.889 - 6125.095: 4.8222% ( 33) 00:08:24.120 6125.095 - 6150.302: 5.0377% ( 24) 00:08:24.120 6150.302 - 6175.508: 5.2443% ( 23) 00:08:24.120 6175.508 - 6200.714: 5.5136% ( 30) 00:08:24.120 6200.714 - 6225.920: 5.7741% ( 29) 00:08:24.120 6225.920 - 6251.126: 6.0075% ( 26) 00:08:24.120 6251.126 - 6276.332: 6.3039% ( 33) 00:08:24.120 6276.332 - 6301.538: 6.7080% ( 45) 00:08:24.120 6301.538 - 6326.745: 7.0492% ( 38) 00:08:24.120 6326.745 - 6351.951: 7.3994% ( 39) 00:08:24.120 6351.951 - 6377.157: 7.7676% ( 41) 00:08:24.120 6377.157 - 6402.363: 8.1537% ( 43) 00:08:24.120 6402.363 - 6427.569: 8.5219% ( 41) 00:08:24.120 6427.569 - 6452.775: 8.7644% ( 27) 00:08:24.120 6452.775 - 6503.188: 9.2403% ( 53) 00:08:24.120 6503.188 - 6553.600: 9.5546% ( 35) 00:08:24.120 6553.600 - 6604.012: 9.7791% ( 25) 00:08:24.120 6604.012 - 6654.425: 9.9677% ( 21) 00:08:24.120 6654.425 - 6704.837: 10.1114% ( 16) 00:08:24.120 6704.837 - 6755.249: 10.1473% ( 4) 00:08:24.120 6755.249 - 6805.662: 10.2101% ( 7) 00:08:24.120 6805.662 - 6856.074: 10.2999% ( 10) 00:08:24.120 6856.074 - 6906.486: 10.3269% ( 3) 00:08:24.120 6906.486 - 6956.898: 10.3358% ( 1) 00:08:24.120 6956.898 - 7007.311: 10.3718% ( 4) 00:08:24.120 7007.311 - 7057.723: 10.3987% ( 3) 00:08:24.120 7057.723 - 7108.135: 10.4167% ( 2) 00:08:24.120 7108.135 - 7158.548: 10.4616% ( 5) 00:08:24.120 7158.548 - 7208.960: 10.6052% ( 16) 00:08:24.120 7208.960 - 7259.372: 
10.7130% ( 12) 00:08:24.120 7259.372 - 7309.785: 10.7938% ( 9) 00:08:24.120 7309.785 - 7360.197: 10.8836% ( 10) 00:08:24.120 7360.197 - 7410.609: 10.9824% ( 11) 00:08:24.120 7410.609 - 7461.022: 11.1889% ( 23) 00:08:24.120 7461.022 - 7511.434: 11.3955% ( 23) 00:08:24.120 7511.434 - 7561.846: 11.6559% ( 29) 00:08:24.120 7561.846 - 7612.258: 11.8624% ( 23) 00:08:24.120 7612.258 - 7662.671: 12.2665% ( 45) 00:08:24.120 7662.671 - 7713.083: 12.5449% ( 31) 00:08:24.120 7713.083 - 7763.495: 12.6706% ( 14) 00:08:24.120 7763.495 - 7813.908: 12.8233% ( 17) 00:08:24.120 7813.908 - 7864.320: 12.9310% ( 12) 00:08:24.120 7864.320 - 7914.732: 13.0388% ( 12) 00:08:24.120 7914.732 - 7965.145: 13.2274% ( 21) 00:08:24.120 7965.145 - 8015.557: 13.4070% ( 20) 00:08:24.120 8015.557 - 8065.969: 13.7302% ( 36) 00:08:24.120 8065.969 - 8116.382: 13.8919% ( 18) 00:08:24.120 8116.382 - 8166.794: 14.0445% ( 17) 00:08:24.120 8166.794 - 8217.206: 14.0984% ( 6) 00:08:24.120 8217.206 - 8267.618: 14.1523% ( 6) 00:08:24.120 8267.618 - 8318.031: 14.2780% ( 14) 00:08:24.120 8318.031 - 8368.443: 14.4935% ( 24) 00:08:24.120 8368.443 - 8418.855: 14.6462% ( 17) 00:08:24.120 8418.855 - 8469.268: 14.7360% ( 10) 00:08:24.120 8469.268 - 8519.680: 14.8886% ( 17) 00:08:24.120 8519.680 - 8570.092: 15.0952% ( 23) 00:08:24.120 8570.092 - 8620.505: 15.1760% ( 9) 00:08:24.120 8620.505 - 8670.917: 15.3107% ( 15) 00:08:24.120 8670.917 - 8721.329: 15.4274% ( 13) 00:08:24.120 8721.329 - 8771.742: 15.5262% ( 11) 00:08:24.120 8771.742 - 8822.154: 15.6430% ( 13) 00:08:24.120 8822.154 - 8872.566: 15.7956% ( 17) 00:08:24.120 8872.566 - 8922.978: 15.9393% ( 16) 00:08:24.120 8922.978 - 8973.391: 16.2446% ( 34) 00:08:24.120 8973.391 - 9023.803: 16.6397% ( 44) 00:08:24.120 9023.803 - 9074.215: 16.9361% ( 33) 00:08:24.120 9074.215 - 9124.628: 17.3132% ( 42) 00:08:24.120 9124.628 - 9175.040: 17.6275% ( 35) 00:08:24.120 9175.040 - 9225.452: 17.8251% ( 22) 00:08:24.120 9225.452 - 9275.865: 18.1214% ( 33) 00:08:24.120 9275.865 - 9326.277: 18.4267% ( 34) 00:08:24.120 9326.277 - 9376.689: 18.6602% ( 26) 00:08:24.120 9376.689 - 9427.102: 18.9745% ( 35) 00:08:24.120 9427.102 - 9477.514: 19.2619% ( 32) 00:08:24.120 9477.514 - 9527.926: 19.7647% ( 56) 00:08:24.120 9527.926 - 9578.338: 20.2676% ( 56) 00:08:24.120 9578.338 - 9628.751: 20.8603% ( 66) 00:08:24.120 9628.751 - 9679.163: 21.2733% ( 46) 00:08:24.120 9679.163 - 9729.575: 21.9199% ( 72) 00:08:24.120 9729.575 - 9779.988: 22.3958% ( 53) 00:08:24.120 9779.988 - 9830.400: 22.9795% ( 65) 00:08:24.120 9830.400 - 9880.812: 23.6710% ( 77) 00:08:24.120 9880.812 - 9931.225: 24.1469% ( 53) 00:08:24.120 9931.225 - 9981.637: 24.9731% ( 92) 00:08:24.120 9981.637 - 10032.049: 25.7453% ( 86) 00:08:24.120 10032.049 - 10082.462: 26.8499% ( 123) 00:08:24.120 10082.462 - 10132.874: 27.8107% ( 107) 00:08:24.120 10132.874 - 10183.286: 28.7356% ( 103) 00:08:24.120 10183.286 - 10233.698: 29.6695% ( 104) 00:08:24.120 10233.698 - 10284.111: 30.8998% ( 137) 00:08:24.120 10284.111 - 10334.523: 32.0133% ( 124) 00:08:24.120 10334.523 - 10384.935: 33.1717% ( 129) 00:08:24.120 10384.935 - 10435.348: 33.9619% ( 88) 00:08:24.120 10435.348 - 10485.760: 34.6893% ( 81) 00:08:24.120 10485.760 - 10536.172: 35.6322% ( 105) 00:08:24.120 10536.172 - 10586.585: 36.5571% ( 103) 00:08:24.120 10586.585 - 10636.997: 37.4910% ( 104) 00:08:24.120 10636.997 - 10687.409: 38.3531% ( 96) 00:08:24.120 10687.409 - 10737.822: 39.3858% ( 115) 00:08:24.120 10737.822 - 10788.234: 40.2658% ( 98) 00:08:24.120 10788.234 - 10838.646: 41.1099% ( 94) 00:08:24.120 
10838.646 - 10889.058: 42.0259% ( 102) 00:08:24.120 10889.058 - 10939.471: 42.8251% ( 89) 00:08:24.120 10939.471 - 10989.883: 43.4986% ( 75) 00:08:24.120 10989.883 - 11040.295: 44.2978% ( 89) 00:08:24.120 11040.295 - 11090.708: 45.0521% ( 84) 00:08:24.120 11090.708 - 11141.120: 45.9501% ( 100) 00:08:24.120 11141.120 - 11191.532: 46.7313% ( 87) 00:08:24.120 11191.532 - 11241.945: 47.5665% ( 93) 00:08:24.120 11241.945 - 11292.357: 48.4465% ( 98) 00:08:24.120 11292.357 - 11342.769: 49.2816% ( 93) 00:08:24.120 11342.769 - 11393.182: 50.0269% ( 83) 00:08:24.120 11393.182 - 11443.594: 50.7992% ( 86) 00:08:24.120 11443.594 - 11494.006: 51.5266% ( 81) 00:08:24.120 11494.006 - 11544.418: 51.9576% ( 48) 00:08:24.120 11544.418 - 11594.831: 52.5233% ( 63) 00:08:24.120 11594.831 - 11645.243: 52.9723% ( 50) 00:08:24.120 11645.243 - 11695.655: 53.5381% ( 63) 00:08:24.120 11695.655 - 11746.068: 54.0050% ( 52) 00:08:24.120 11746.068 - 11796.480: 54.5348% ( 59) 00:08:24.120 11796.480 - 11846.892: 55.1814% ( 72) 00:08:24.120 11846.892 - 11897.305: 55.7741% ( 66) 00:08:24.120 11897.305 - 11947.717: 56.3847% ( 68) 00:08:24.120 11947.717 - 11998.129: 57.0761% ( 77) 00:08:24.120 11998.129 - 12048.542: 57.9562% ( 98) 00:08:24.120 12048.542 - 12098.954: 58.4321% ( 53) 00:08:24.120 12098.954 - 12149.366: 58.8272% ( 44) 00:08:24.120 12149.366 - 12199.778: 59.3032% ( 53) 00:08:24.120 12199.778 - 12250.191: 59.8509% ( 61) 00:08:24.120 12250.191 - 12300.603: 60.3987% ( 61) 00:08:24.120 12300.603 - 12351.015: 60.8657% ( 52) 00:08:24.120 12351.015 - 12401.428: 61.5841% ( 80) 00:08:24.120 12401.428 - 12451.840: 62.1049% ( 58) 00:08:24.120 12451.840 - 12502.252: 62.5629% ( 51) 00:08:24.120 12502.252 - 12552.665: 63.2453% ( 76) 00:08:24.120 12552.665 - 12603.077: 63.8021% ( 62) 00:08:24.120 12603.077 - 12653.489: 64.4756% ( 75) 00:08:24.120 12653.489 - 12703.902: 65.1940% ( 80) 00:08:24.120 12703.902 - 12754.314: 65.8315% ( 71) 00:08:24.120 12754.314 - 12804.726: 66.6487% ( 91) 00:08:24.120 12804.726 - 12855.138: 67.1695% ( 58) 00:08:24.120 12855.138 - 12905.551: 67.9239% ( 84) 00:08:24.120 12905.551 - 13006.375: 69.2708% ( 150) 00:08:24.120 13006.375 - 13107.200: 70.6268% ( 151) 00:08:24.120 13107.200 - 13208.025: 71.9828% ( 151) 00:08:24.120 13208.025 - 13308.849: 73.4285% ( 161) 00:08:24.120 13308.849 - 13409.674: 74.7396% ( 146) 00:08:24.120 13409.674 - 13510.498: 76.0955% ( 151) 00:08:24.120 13510.498 - 13611.323: 77.3348% ( 138) 00:08:24.120 13611.323 - 13712.148: 78.5201% ( 132) 00:08:24.120 13712.148 - 13812.972: 79.7773% ( 140) 00:08:24.120 13812.972 - 13913.797: 81.1692% ( 155) 00:08:24.120 13913.797 - 14014.622: 82.0941% ( 103) 00:08:24.120 14014.622 - 14115.446: 83.1897% ( 122) 00:08:24.120 14115.446 - 14216.271: 84.0787% ( 99) 00:08:24.120 14216.271 - 14317.095: 84.9856% ( 101) 00:08:24.120 14317.095 - 14417.920: 85.8657% ( 98) 00:08:24.120 14417.920 - 14518.745: 86.7816% ( 102) 00:08:24.120 14518.745 - 14619.569: 87.3653% ( 65) 00:08:24.120 14619.569 - 14720.394: 87.9580% ( 66) 00:08:24.120 14720.394 - 14821.218: 88.5686% ( 68) 00:08:24.121 14821.218 - 14922.043: 89.2780% ( 79) 00:08:24.121 14922.043 - 15022.868: 89.8527% ( 64) 00:08:24.121 15022.868 - 15123.692: 90.2927% ( 49) 00:08:24.121 15123.692 - 15224.517: 90.7238% ( 48) 00:08:24.121 15224.517 - 15325.342: 91.2805% ( 62) 00:08:24.121 15325.342 - 15426.166: 91.7475% ( 52) 00:08:24.121 15426.166 - 15526.991: 92.0438% ( 33) 00:08:24.121 15526.991 - 15627.815: 92.3851% ( 38) 00:08:24.121 15627.815 - 15728.640: 92.8790% ( 55) 00:08:24.121 15728.640 - 
15829.465: 93.3818% ( 56) 00:08:24.121 15829.465 - 15930.289: 94.0912% ( 79) 00:08:24.121 15930.289 - 16031.114: 94.4864% ( 44) 00:08:24.121 16031.114 - 16131.938: 94.9892% ( 56) 00:08:24.121 16131.938 - 16232.763: 95.7166% ( 81) 00:08:24.121 16232.763 - 16333.588: 96.1656% ( 50) 00:08:24.121 16333.588 - 16434.412: 96.4889% ( 36) 00:08:24.121 16434.412 - 16535.237: 96.7942% ( 34) 00:08:24.121 16535.237 - 16636.062: 97.2252% ( 48) 00:08:24.121 16636.062 - 16736.886: 97.5844% ( 40) 00:08:24.121 16736.886 - 16837.711: 97.8538% ( 30) 00:08:24.121 16837.711 - 16938.535: 98.1681% ( 35) 00:08:24.121 16938.535 - 17039.360: 98.3657% ( 22) 00:08:24.121 17039.360 - 17140.185: 98.5453% ( 20) 00:08:24.121 17140.185 - 17241.009: 98.6530% ( 12) 00:08:24.121 17241.009 - 17341.834: 98.7249% ( 8) 00:08:24.121 17341.834 - 17442.658: 98.7967% ( 8) 00:08:24.121 17442.658 - 17543.483: 98.8416% ( 5) 00:08:24.121 17543.483 - 17644.308: 98.8506% ( 1) 00:08:24.121 19761.625 - 19862.449: 98.8596% ( 1) 00:08:24.121 19862.449 - 19963.274: 98.8685% ( 1) 00:08:24.121 19963.274 - 20064.098: 98.8955% ( 3) 00:08:24.121 20064.098 - 20164.923: 98.9404% ( 5) 00:08:24.121 20164.923 - 20265.748: 98.9853% ( 5) 00:08:24.121 20265.748 - 20366.572: 99.0392% ( 6) 00:08:24.121 20366.572 - 20467.397: 99.0841% ( 5) 00:08:24.121 20467.397 - 20568.222: 99.1290% ( 5) 00:08:24.121 20568.222 - 20669.046: 99.1739% ( 5) 00:08:24.121 20669.046 - 20769.871: 99.2367% ( 7) 00:08:24.121 20769.871 - 20870.695: 99.2906% ( 6) 00:08:24.121 20870.695 - 20971.520: 99.3265% ( 4) 00:08:24.121 20971.520 - 21072.345: 99.3714% ( 5) 00:08:24.121 21072.345 - 21173.169: 99.4163% ( 5) 00:08:24.121 21173.169 - 21273.994: 99.4253% ( 1) 00:08:24.121 25206.154 - 25306.978: 99.4343% ( 1) 00:08:24.121 25306.978 - 25407.803: 99.4792% ( 5) 00:08:24.121 25407.803 - 25508.628: 99.5061% ( 3) 00:08:24.121 25508.628 - 25609.452: 99.5510% ( 5) 00:08:24.121 25609.452 - 25710.277: 99.5869% ( 4) 00:08:24.121 25710.277 - 25811.102: 99.6228% ( 4) 00:08:24.121 25811.102 - 26012.751: 99.7126% ( 10) 00:08:24.121 26012.751 - 26214.400: 99.8024% ( 10) 00:08:24.121 26214.400 - 26416.049: 99.9012% ( 11) 00:08:24.121 26416.049 - 26617.698: 99.9820% ( 9) 00:08:24.121 26617.698 - 26819.348: 100.0000% ( 2) 00:08:24.121 00:08:24.121 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.121 ============================================================================== 00:08:24.121 Range in us Cumulative IO count 00:08:24.121 5494.942 - 5520.148: 0.0090% ( 1) 00:08:24.121 5520.148 - 5545.354: 0.0180% ( 1) 00:08:24.121 5545.354 - 5570.560: 0.0359% ( 2) 00:08:24.121 5570.560 - 5595.766: 0.0808% ( 5) 00:08:24.121 5595.766 - 5620.972: 0.1257% ( 5) 00:08:24.121 5620.972 - 5646.178: 0.2065% ( 9) 00:08:24.121 5646.178 - 5671.385: 0.3233% ( 13) 00:08:24.121 5671.385 - 5696.591: 0.4580% ( 15) 00:08:24.121 5696.591 - 5721.797: 1.0686% ( 68) 00:08:24.121 5721.797 - 5747.003: 1.4547% ( 43) 00:08:24.121 5747.003 - 5772.209: 1.6074% ( 17) 00:08:24.121 5772.209 - 5797.415: 1.7421% ( 15) 00:08:24.121 5797.415 - 5822.622: 1.8409% ( 11) 00:08:24.121 5822.622 - 5847.828: 2.0564% ( 24) 00:08:24.121 5847.828 - 5873.034: 2.5144% ( 51) 00:08:24.121 5873.034 - 5898.240: 2.7299% ( 24) 00:08:24.121 5898.240 - 5923.446: 2.8376% ( 12) 00:08:24.121 5923.446 - 5948.652: 2.9364% ( 11) 00:08:24.121 5948.652 - 5973.858: 2.9993% ( 7) 00:08:24.121 5973.858 - 5999.065: 3.0621% ( 7) 00:08:24.121 5999.065 - 6024.271: 3.1609% ( 11) 00:08:24.121 6024.271 - 6049.477: 3.4034% ( 27) 00:08:24.121 6049.477 - 6074.683: 
3.5201% ( 13) 00:08:24.121 6074.683 - 6099.889: 3.6279% ( 12) 00:08:24.121 6099.889 - 6125.095: 3.7716% ( 16) 00:08:24.121 6125.095 - 6150.302: 3.9511% ( 20) 00:08:24.121 6150.302 - 6175.508: 4.4001% ( 50) 00:08:24.121 6175.508 - 6200.714: 5.2712% ( 97) 00:08:24.121 6200.714 - 6225.920: 6.0165% ( 83) 00:08:24.121 6225.920 - 6251.126: 6.2500% ( 26) 00:08:24.121 6251.126 - 6276.332: 6.5374% ( 32) 00:08:24.121 6276.332 - 6301.538: 7.1390% ( 67) 00:08:24.121 6301.538 - 6326.745: 7.2917% ( 17) 00:08:24.121 6326.745 - 6351.951: 7.4892% ( 22) 00:08:24.121 6351.951 - 6377.157: 7.6958% ( 23) 00:08:24.121 6377.157 - 6402.363: 7.9113% ( 24) 00:08:24.121 6402.363 - 6427.569: 8.4142% ( 56) 00:08:24.121 6427.569 - 6452.775: 8.8452% ( 48) 00:08:24.121 6452.775 - 6503.188: 9.7432% ( 100) 00:08:24.121 6503.188 - 6553.600: 10.0665% ( 36) 00:08:24.121 6553.600 - 6604.012: 10.2371% ( 19) 00:08:24.121 6604.012 - 6654.425: 10.2999% ( 7) 00:08:24.121 6654.425 - 6704.837: 10.3448% ( 5) 00:08:24.121 7057.723 - 7108.135: 10.3538% ( 1) 00:08:24.121 7158.548 - 7208.960: 10.3718% ( 2) 00:08:24.121 7208.960 - 7259.372: 10.5065% ( 15) 00:08:24.121 7259.372 - 7309.785: 10.6771% ( 19) 00:08:24.121 7309.785 - 7360.197: 10.9914% ( 35) 00:08:24.121 7360.197 - 7410.609: 11.3596% ( 41) 00:08:24.121 7410.609 - 7461.022: 11.9253% ( 63) 00:08:24.121 7461.022 - 7511.434: 12.2306% ( 34) 00:08:24.121 7511.434 - 7561.846: 12.4551% ( 25) 00:08:24.121 7561.846 - 7612.258: 12.6527% ( 22) 00:08:24.121 7612.258 - 7662.671: 12.7065% ( 6) 00:08:24.121 7662.671 - 7713.083: 12.7514% ( 5) 00:08:24.121 7713.083 - 7763.495: 12.8143% ( 7) 00:08:24.121 7763.495 - 7813.908: 12.8772% ( 7) 00:08:24.121 7813.908 - 7864.320: 12.9849% ( 12) 00:08:24.121 7864.320 - 7914.732: 13.0478% ( 7) 00:08:24.121 7914.732 - 7965.145: 13.1555% ( 12) 00:08:24.121 7965.145 - 8015.557: 13.2902% ( 15) 00:08:24.121 8015.557 - 8065.969: 13.3890% ( 11) 00:08:24.121 8065.969 - 8116.382: 13.5776% ( 21) 00:08:24.121 8116.382 - 8166.794: 14.0625% ( 54) 00:08:24.121 8166.794 - 8217.206: 14.2152% ( 17) 00:08:24.121 8217.206 - 8267.618: 14.2511% ( 4) 00:08:24.121 8267.618 - 8318.031: 14.2780% ( 3) 00:08:24.121 8318.031 - 8368.443: 14.3588% ( 9) 00:08:24.121 8368.443 - 8418.855: 14.5025% ( 16) 00:08:24.121 8418.855 - 8469.268: 14.6552% ( 17) 00:08:24.121 8469.268 - 8519.680: 14.9335% ( 31) 00:08:24.121 8519.680 - 8570.092: 15.0323% ( 11) 00:08:24.121 8570.092 - 8620.505: 15.1042% ( 8) 00:08:24.121 8620.505 - 8670.917: 15.2029% ( 11) 00:08:24.121 8670.917 - 8721.329: 15.3107% ( 12) 00:08:24.121 8721.329 - 8771.742: 15.4274% ( 13) 00:08:24.121 8771.742 - 8822.154: 15.5891% ( 18) 00:08:24.121 8822.154 - 8872.566: 15.7417% ( 17) 00:08:24.121 8872.566 - 8922.978: 15.9303% ( 21) 00:08:24.121 8922.978 - 8973.391: 16.1728% ( 27) 00:08:24.121 8973.391 - 9023.803: 16.4871% ( 35) 00:08:24.121 9023.803 - 9074.215: 16.8373% ( 39) 00:08:24.121 9074.215 - 9124.628: 17.1336% ( 33) 00:08:24.121 9124.628 - 9175.040: 17.5287% ( 44) 00:08:24.121 9175.040 - 9225.452: 17.8520% ( 36) 00:08:24.121 9225.452 - 9275.865: 18.1304% ( 31) 00:08:24.121 9275.865 - 9326.277: 18.3639% ( 26) 00:08:24.121 9326.277 - 9376.689: 18.6692% ( 34) 00:08:24.121 9376.689 - 9427.102: 18.9476% ( 31) 00:08:24.121 9427.102 - 9477.514: 19.4594% ( 57) 00:08:24.121 9477.514 - 9527.926: 19.7108% ( 28) 00:08:24.121 9527.926 - 9578.338: 20.0072% ( 33) 00:08:24.121 9578.338 - 9628.751: 20.2317% ( 25) 00:08:24.121 9628.751 - 9679.163: 20.5819% ( 39) 00:08:24.121 9679.163 - 9729.575: 20.9142% ( 37) 00:08:24.121 9729.575 - 9779.988: 
21.3991% ( 54) 00:08:24.121 9779.988 - 9830.400: 21.7672% ( 41) 00:08:24.121 9830.400 - 9880.812: 22.2971% ( 59) 00:08:24.121 9880.812 - 9931.225: 22.8807% ( 65) 00:08:24.121 9931.225 - 9981.637: 23.7967% ( 102) 00:08:24.121 9981.637 - 10032.049: 25.1167% ( 147) 00:08:24.121 10032.049 - 10082.462: 26.2931% ( 131) 00:08:24.121 10082.462 - 10132.874: 27.3976% ( 123) 00:08:24.121 10132.874 - 10183.286: 28.5201% ( 125) 00:08:24.121 10183.286 - 10233.698: 29.6067% ( 121) 00:08:24.121 10233.698 - 10284.111: 30.8010% ( 133) 00:08:24.121 10284.111 - 10334.523: 31.9684% ( 130) 00:08:24.121 10334.523 - 10384.935: 33.1717% ( 134) 00:08:24.121 10384.935 - 10435.348: 34.1954% ( 114) 00:08:24.121 10435.348 - 10485.760: 35.2460% ( 117) 00:08:24.121 10485.760 - 10536.172: 36.2877% ( 116) 00:08:24.121 10536.172 - 10586.585: 37.2486% ( 107) 00:08:24.121 10586.585 - 10636.997: 38.0837% ( 93) 00:08:24.121 10636.997 - 10687.409: 39.0894% ( 112) 00:08:24.121 10687.409 - 10737.822: 40.2119% ( 125) 00:08:24.121 10737.822 - 10788.234: 41.1458% ( 104) 00:08:24.121 10788.234 - 10838.646: 41.8822% ( 82) 00:08:24.121 10838.646 - 10889.058: 42.7892% ( 101) 00:08:24.122 10889.058 - 10939.471: 43.4806% ( 77) 00:08:24.122 10939.471 - 10989.883: 44.1631% ( 76) 00:08:24.122 10989.883 - 11040.295: 45.0521% ( 99) 00:08:24.122 11040.295 - 11090.708: 46.0309% ( 109) 00:08:24.122 11090.708 - 11141.120: 46.9828% ( 106) 00:08:24.122 11141.120 - 11191.532: 47.7011% ( 80) 00:08:24.122 11191.532 - 11241.945: 48.5273% ( 92) 00:08:24.122 11241.945 - 11292.357: 49.1649% ( 71) 00:08:24.122 11292.357 - 11342.769: 49.7396% ( 64) 00:08:24.122 11342.769 - 11393.182: 50.3143% ( 64) 00:08:24.122 11393.182 - 11443.594: 50.8261% ( 57) 00:08:24.122 11443.594 - 11494.006: 51.2662% ( 49) 00:08:24.122 11494.006 - 11544.418: 51.8588% ( 66) 00:08:24.122 11544.418 - 11594.831: 52.3886% ( 59) 00:08:24.122 11594.831 - 11645.243: 53.0711% ( 76) 00:08:24.122 11645.243 - 11695.655: 53.7805% ( 79) 00:08:24.122 11695.655 - 11746.068: 54.4989% ( 80) 00:08:24.122 11746.068 - 11796.480: 55.1185% ( 69) 00:08:24.122 11796.480 - 11846.892: 55.7112% ( 66) 00:08:24.122 11846.892 - 11897.305: 56.3129% ( 67) 00:08:24.122 11897.305 - 11947.717: 57.1031% ( 88) 00:08:24.122 11947.717 - 11998.129: 57.8215% ( 80) 00:08:24.122 11998.129 - 12048.542: 58.5040% ( 76) 00:08:24.122 12048.542 - 12098.954: 59.1505% ( 72) 00:08:24.122 12098.954 - 12149.366: 59.7432% ( 66) 00:08:24.122 12149.366 - 12199.778: 60.3628% ( 69) 00:08:24.122 12199.778 - 12250.191: 60.9016% ( 60) 00:08:24.122 12250.191 - 12300.603: 61.5032% ( 67) 00:08:24.122 12300.603 - 12351.015: 62.2216% ( 80) 00:08:24.122 12351.015 - 12401.428: 62.8682% ( 72) 00:08:24.122 12401.428 - 12451.840: 63.4878% ( 69) 00:08:24.122 12451.840 - 12502.252: 64.1074% ( 69) 00:08:24.122 12502.252 - 12552.665: 64.6462% ( 60) 00:08:24.122 12552.665 - 12603.077: 65.1042% ( 51) 00:08:24.122 12603.077 - 12653.489: 65.5801% ( 53) 00:08:24.122 12653.489 - 12703.902: 65.9932% ( 46) 00:08:24.122 12703.902 - 12754.314: 66.5140% ( 58) 00:08:24.122 12754.314 - 12804.726: 67.0618% ( 61) 00:08:24.122 12804.726 - 12855.138: 67.6185% ( 62) 00:08:24.122 12855.138 - 12905.551: 68.1753% ( 62) 00:08:24.122 12905.551 - 13006.375: 69.2978% ( 125) 00:08:24.122 13006.375 - 13107.200: 70.5999% ( 145) 00:08:24.122 13107.200 - 13208.025: 72.2162% ( 180) 00:08:24.122 13208.025 - 13308.849: 73.8955% ( 187) 00:08:24.122 13308.849 - 13409.674: 75.1616% ( 141) 00:08:24.122 13409.674 - 13510.498: 76.4188% ( 140) 00:08:24.122 13510.498 - 13611.323: 77.7838% ( 152) 
00:08:24.122 13611.323 - 13712.148: 79.1846% ( 156) 00:08:24.122 13712.148 - 13812.972: 80.6394% ( 162) 00:08:24.122 13812.972 - 13913.797: 81.9684% ( 148) 00:08:24.122 13913.797 - 14014.622: 83.0370% ( 119) 00:08:24.122 14014.622 - 14115.446: 84.1146% ( 120) 00:08:24.122 14115.446 - 14216.271: 84.8330% ( 80) 00:08:24.122 14216.271 - 14317.095: 85.6681% ( 93) 00:08:24.122 14317.095 - 14417.920: 86.5302% ( 96) 00:08:24.122 14417.920 - 14518.745: 87.2216% ( 77) 00:08:24.122 14518.745 - 14619.569: 87.7335% ( 57) 00:08:24.122 14619.569 - 14720.394: 88.1196% ( 43) 00:08:24.122 14720.394 - 14821.218: 88.4698% ( 39) 00:08:24.122 14821.218 - 14922.043: 88.9458% ( 53) 00:08:24.122 14922.043 - 15022.868: 89.4397% ( 55) 00:08:24.122 15022.868 - 15123.692: 89.8976% ( 51) 00:08:24.122 15123.692 - 15224.517: 90.3825% ( 54) 00:08:24.122 15224.517 - 15325.342: 90.8764% ( 55) 00:08:24.122 15325.342 - 15426.166: 91.2087% ( 37) 00:08:24.122 15426.166 - 15526.991: 91.5679% ( 40) 00:08:24.122 15526.991 - 15627.815: 91.9899% ( 47) 00:08:24.122 15627.815 - 15728.640: 92.3132% ( 36) 00:08:24.122 15728.640 - 15829.465: 92.6904% ( 42) 00:08:24.122 15829.465 - 15930.289: 93.2471% ( 62) 00:08:24.122 15930.289 - 16031.114: 93.7410% ( 55) 00:08:24.122 16031.114 - 16131.938: 94.2619% ( 58) 00:08:24.122 16131.938 - 16232.763: 94.7737% ( 57) 00:08:24.122 16232.763 - 16333.588: 95.1778% ( 45) 00:08:24.122 16333.588 - 16434.412: 95.6178% ( 49) 00:08:24.122 16434.412 - 16535.237: 96.0219% ( 45) 00:08:24.122 16535.237 - 16636.062: 96.3991% ( 42) 00:08:24.122 16636.062 - 16736.886: 96.8121% ( 46) 00:08:24.122 16736.886 - 16837.711: 97.1803% ( 41) 00:08:24.122 16837.711 - 16938.535: 97.4856% ( 34) 00:08:24.122 16938.535 - 17039.360: 97.7820% ( 33) 00:08:24.122 17039.360 - 17140.185: 97.9885% ( 23) 00:08:24.122 17140.185 - 17241.009: 98.1681% ( 20) 00:08:24.122 17241.009 - 17341.834: 98.3208% ( 17) 00:08:24.122 17341.834 - 17442.658: 98.4465% ( 14) 00:08:24.122 17442.658 - 17543.483: 98.5632% ( 13) 00:08:24.122 17543.483 - 17644.308: 98.6620% ( 11) 00:08:24.122 17644.308 - 17745.132: 98.7518% ( 10) 00:08:24.122 17745.132 - 17845.957: 98.8236% ( 8) 00:08:24.122 17845.957 - 17946.782: 98.8506% ( 3) 00:08:24.122 19862.449 - 19963.274: 98.8685% ( 2) 00:08:24.122 19963.274 - 20064.098: 98.9494% ( 9) 00:08:24.122 20064.098 - 20164.923: 99.0032% ( 6) 00:08:24.122 20164.923 - 20265.748: 99.0481% ( 5) 00:08:24.122 20265.748 - 20366.572: 99.0930% ( 5) 00:08:24.122 20366.572 - 20467.397: 99.1559% ( 7) 00:08:24.122 20467.397 - 20568.222: 99.2098% ( 6) 00:08:24.122 20568.222 - 20669.046: 99.2636% ( 6) 00:08:24.122 20669.046 - 20769.871: 99.3175% ( 6) 00:08:24.122 20769.871 - 20870.695: 99.3714% ( 6) 00:08:24.122 20870.695 - 20971.520: 99.4163% ( 5) 00:08:24.122 20971.520 - 21072.345: 99.4253% ( 1) 00:08:24.122 24802.855 - 24903.680: 99.4343% ( 1) 00:08:24.122 24903.680 - 25004.505: 99.4432% ( 1) 00:08:24.122 25206.154 - 25306.978: 99.4612% ( 2) 00:08:24.122 25306.978 - 25407.803: 99.5241% ( 7) 00:08:24.122 25407.803 - 25508.628: 99.5690% ( 5) 00:08:24.122 25508.628 - 25609.452: 99.6228% ( 6) 00:08:24.122 25609.452 - 25710.277: 99.6677% ( 5) 00:08:24.122 25710.277 - 25811.102: 99.7216% ( 6) 00:08:24.122 25811.102 - 26012.751: 99.8204% ( 11) 00:08:24.122 26012.751 - 26214.400: 99.9102% ( 10) 00:08:24.122 26214.400 - 26416.049: 100.0000% ( 10) 00:08:24.122 00:08:24.122 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:24.122 ============================================================================== 00:08:24.122 Range in us 
Cumulative IO count 00:08:24.122 5394.117 - 5419.323: 0.0090% ( 1) 00:08:24.122 5419.323 - 5444.529: 0.0269% ( 2) 00:08:24.122 5469.735 - 5494.942: 0.0449% ( 2) 00:08:24.122 5494.942 - 5520.148: 0.0808% ( 4) 00:08:24.122 5520.148 - 5545.354: 0.1078% ( 3) 00:08:24.122 5545.354 - 5570.560: 0.1796% ( 8) 00:08:24.122 5570.560 - 5595.766: 0.2425% ( 7) 00:08:24.122 5595.766 - 5620.972: 0.3502% ( 12) 00:08:24.122 5620.972 - 5646.178: 0.5837% ( 26) 00:08:24.122 5646.178 - 5671.385: 0.7274% ( 16) 00:08:24.122 5671.385 - 5696.591: 0.8621% ( 15) 00:08:24.122 5696.591 - 5721.797: 1.0686% ( 23) 00:08:24.122 5721.797 - 5747.003: 1.3829% ( 35) 00:08:24.122 5747.003 - 5772.209: 1.6523% ( 30) 00:08:24.122 5772.209 - 5797.415: 2.0025% ( 39) 00:08:24.122 5797.415 - 5822.622: 2.1642% ( 18) 00:08:24.122 5822.622 - 5847.828: 2.4784% ( 35) 00:08:24.122 5847.828 - 5873.034: 2.6401% ( 18) 00:08:24.122 5873.034 - 5898.240: 2.7748% ( 15) 00:08:24.122 5898.240 - 5923.446: 2.9005% ( 14) 00:08:24.122 5923.446 - 5948.652: 3.0262% ( 14) 00:08:24.122 5948.652 - 5973.858: 3.0981% ( 8) 00:08:24.122 5973.858 - 5999.065: 3.3226% ( 25) 00:08:24.122 5999.065 - 6024.271: 3.4124% ( 10) 00:08:24.122 6024.271 - 6049.477: 3.5201% ( 12) 00:08:24.122 6049.477 - 6074.683: 3.6099% ( 10) 00:08:24.122 6074.683 - 6099.889: 3.6818% ( 8) 00:08:24.122 6099.889 - 6125.095: 3.9601% ( 31) 00:08:24.122 6125.095 - 6150.302: 4.0948% ( 15) 00:08:24.122 6150.302 - 6175.508: 4.2475% ( 17) 00:08:24.122 6175.508 - 6200.714: 4.4181% ( 19) 00:08:24.122 6200.714 - 6225.920: 4.5977% ( 20) 00:08:24.122 6225.920 - 6251.126: 4.8671% ( 30) 00:08:24.122 6251.126 - 6276.332: 5.5585% ( 77) 00:08:24.122 6276.332 - 6301.538: 5.9626% ( 45) 00:08:24.122 6301.538 - 6326.745: 6.4655% ( 56) 00:08:24.122 6326.745 - 6351.951: 6.8068% ( 38) 00:08:24.122 6351.951 - 6377.157: 7.1031% ( 33) 00:08:24.122 6377.157 - 6402.363: 7.5970% ( 55) 00:08:24.122 6402.363 - 6427.569: 8.0729% ( 53) 00:08:24.122 6427.569 - 6452.775: 8.3333% ( 29) 00:08:24.122 6452.775 - 6503.188: 8.8991% ( 63) 00:08:24.122 6503.188 - 6553.600: 9.3391% ( 49) 00:08:24.122 6553.600 - 6604.012: 9.8240% ( 54) 00:08:24.122 6604.012 - 6654.425: 10.0305% ( 23) 00:08:24.122 6654.425 - 6704.837: 10.3807% ( 39) 00:08:24.122 6704.837 - 6755.249: 10.6142% ( 26) 00:08:24.122 6755.249 - 6805.662: 10.7489% ( 15) 00:08:24.122 6805.662 - 6856.074: 10.8387% ( 10) 00:08:24.122 6856.074 - 6906.486: 10.8926% ( 6) 00:08:24.122 6906.486 - 6956.898: 10.9195% ( 3) 00:08:24.122 7007.311 - 7057.723: 10.9285% ( 1) 00:08:24.122 7158.548 - 7208.960: 10.9824% ( 6) 00:08:24.122 7208.960 - 7259.372: 11.0542% ( 8) 00:08:24.122 7259.372 - 7309.785: 11.1710% ( 13) 00:08:24.122 7309.785 - 7360.197: 11.3775% ( 23) 00:08:24.122 7360.197 - 7410.609: 11.5032% ( 14) 00:08:24.122 7410.609 - 7461.022: 11.6379% ( 15) 00:08:24.122 7461.022 - 7511.434: 11.7367% ( 11) 00:08:24.122 7511.434 - 7561.846: 11.9253% ( 21) 00:08:24.123 7561.846 - 7612.258: 12.0241% ( 11) 00:08:24.123 7612.258 - 7662.671: 12.1498% ( 14) 00:08:24.123 7662.671 - 7713.083: 12.2755% ( 14) 00:08:24.123 7713.083 - 7763.495: 12.4731% ( 22) 00:08:24.123 7763.495 - 7813.908: 12.6616% ( 21) 00:08:24.123 7813.908 - 7864.320: 12.8682% ( 23) 00:08:24.123 7864.320 - 7914.732: 13.2004% ( 37) 00:08:24.123 7914.732 - 7965.145: 13.5147% ( 35) 00:08:24.123 7965.145 - 8015.557: 13.6584% ( 16) 00:08:24.123 8015.557 - 8065.969: 13.7572% ( 11) 00:08:24.123 8065.969 - 8116.382: 13.7841% ( 3) 00:08:24.123 8116.382 - 8166.794: 13.7931% ( 1) 00:08:24.123 8217.206 - 8267.618: 13.8021% ( 1) 00:08:24.123 
8267.618 - 8318.031: 13.8111% ( 1) 00:08:24.123 8318.031 - 8368.443: 13.8470% ( 4) 00:08:24.123 8368.443 - 8418.855: 13.8739% ( 3) 00:08:24.123 8418.855 - 8469.268: 13.9188% ( 5) 00:08:24.123 8469.268 - 8519.680: 13.9727% ( 6) 00:08:24.123 8519.680 - 8570.092: 14.1523% ( 20) 00:08:24.123 8570.092 - 8620.505: 14.2960% ( 16) 00:08:24.123 8620.505 - 8670.917: 14.4037% ( 12) 00:08:24.123 8670.917 - 8721.329: 14.5833% ( 20) 00:08:24.123 8721.329 - 8771.742: 14.9515% ( 41) 00:08:24.123 8771.742 - 8822.154: 15.3287% ( 42) 00:08:24.123 8822.154 - 8872.566: 15.6699% ( 38) 00:08:24.123 8872.566 - 8922.978: 16.0201% ( 39) 00:08:24.123 8922.978 - 8973.391: 16.3524% ( 37) 00:08:24.123 8973.391 - 9023.803: 16.6756% ( 36) 00:08:24.123 9023.803 - 9074.215: 17.1336% ( 51) 00:08:24.123 9074.215 - 9124.628: 17.6545% ( 58) 00:08:24.123 9124.628 - 9175.040: 18.0406% ( 43) 00:08:24.123 9175.040 - 9225.452: 18.4267% ( 43) 00:08:24.123 9225.452 - 9275.865: 18.8488% ( 47) 00:08:24.123 9275.865 - 9326.277: 19.2529% ( 45) 00:08:24.123 9326.277 - 9376.689: 19.6121% ( 40) 00:08:24.123 9376.689 - 9427.102: 19.8545% ( 27) 00:08:24.123 9427.102 - 9477.514: 20.1688% ( 35) 00:08:24.123 9477.514 - 9527.926: 20.8154% ( 72) 00:08:24.123 9527.926 - 9578.338: 21.1566% ( 38) 00:08:24.123 9578.338 - 9628.751: 21.6236% ( 52) 00:08:24.123 9628.751 - 9679.163: 22.0007% ( 42) 00:08:24.123 9679.163 - 9729.575: 22.4497% ( 50) 00:08:24.123 9729.575 - 9779.988: 22.8718% ( 47) 00:08:24.123 9779.988 - 9830.400: 23.2489% ( 42) 00:08:24.123 9830.400 - 9880.812: 23.7518% ( 56) 00:08:24.123 9880.812 - 9931.225: 24.5151% ( 85) 00:08:24.123 9931.225 - 9981.637: 25.2155% ( 78) 00:08:24.123 9981.637 - 10032.049: 25.7812% ( 63) 00:08:24.123 10032.049 - 10082.462: 26.4368% ( 73) 00:08:24.123 10082.462 - 10132.874: 27.3707% ( 104) 00:08:24.123 10132.874 - 10183.286: 28.4303% ( 118) 00:08:24.123 10183.286 - 10233.698: 29.5797% ( 128) 00:08:24.123 10233.698 - 10284.111: 30.4957% ( 102) 00:08:24.123 10284.111 - 10334.523: 31.4565% ( 107) 00:08:24.123 10334.523 - 10384.935: 32.5341% ( 120) 00:08:24.123 10384.935 - 10435.348: 33.6027% ( 119) 00:08:24.123 10435.348 - 10485.760: 34.4738% ( 97) 00:08:24.123 10485.760 - 10536.172: 35.3807% ( 101) 00:08:24.123 10536.172 - 10586.585: 36.3326% ( 106) 00:08:24.123 10586.585 - 10636.997: 37.2575% ( 103) 00:08:24.123 10636.997 - 10687.409: 38.3172% ( 118) 00:08:24.123 10687.409 - 10737.822: 39.2152% ( 100) 00:08:24.123 10737.822 - 10788.234: 40.3017% ( 121) 00:08:24.123 10788.234 - 10838.646: 41.1638% ( 96) 00:08:24.123 10838.646 - 10889.058: 42.1516% ( 110) 00:08:24.123 10889.058 - 10939.471: 43.3549% ( 134) 00:08:24.123 10939.471 - 10989.883: 44.1092% ( 84) 00:08:24.123 10989.883 - 11040.295: 45.0162% ( 101) 00:08:24.123 11040.295 - 11090.708: 45.5999% ( 65) 00:08:24.123 11090.708 - 11141.120: 46.1656% ( 63) 00:08:24.123 11141.120 - 11191.532: 46.8660% ( 78) 00:08:24.123 11191.532 - 11241.945: 47.7909% ( 103) 00:08:24.123 11241.945 - 11292.357: 48.6171% ( 92) 00:08:24.123 11292.357 - 11342.769: 49.2098% ( 66) 00:08:24.123 11342.769 - 11393.182: 49.7845% ( 64) 00:08:24.123 11393.182 - 11443.594: 50.2963% ( 57) 00:08:24.123 11443.594 - 11494.006: 50.8351% ( 60) 00:08:24.123 11494.006 - 11544.418: 51.4996% ( 74) 00:08:24.123 11544.418 - 11594.831: 52.3976% ( 100) 00:08:24.123 11594.831 - 11645.243: 53.0891% ( 77) 00:08:24.123 11645.243 - 11695.655: 53.7356% ( 72) 00:08:24.123 11695.655 - 11746.068: 54.2026% ( 52) 00:08:24.123 11746.068 - 11796.480: 54.7055% ( 56) 00:08:24.123 11796.480 - 11846.892: 55.2981% ( 66) 
00:08:24.123 11846.892 - 11897.305: 55.8908% ( 66) 00:08:24.123 11897.305 - 11947.717: 56.5643% ( 75) 00:08:24.123 11947.717 - 11998.129: 57.2737% ( 79) 00:08:24.123 11998.129 - 12048.542: 57.8933% ( 69) 00:08:24.123 12048.542 - 12098.954: 58.6027% ( 79) 00:08:24.123 12098.954 - 12149.366: 59.0787% ( 53) 00:08:24.123 12149.366 - 12199.778: 59.4738% ( 44) 00:08:24.123 12199.778 - 12250.191: 59.8958% ( 47) 00:08:24.123 12250.191 - 12300.603: 60.4975% ( 67) 00:08:24.123 12300.603 - 12351.015: 61.1710% ( 75) 00:08:24.123 12351.015 - 12401.428: 61.8265% ( 73) 00:08:24.123 12401.428 - 12451.840: 62.8412% ( 113) 00:08:24.123 12451.840 - 12502.252: 63.7931% ( 106) 00:08:24.123 12502.252 - 12552.665: 64.5833% ( 88) 00:08:24.123 12552.665 - 12603.077: 65.4185% ( 93) 00:08:24.123 12603.077 - 12653.489: 66.2356% ( 91) 00:08:24.123 12653.489 - 12703.902: 66.9540% ( 80) 00:08:24.123 12703.902 - 12754.314: 67.6455% ( 77) 00:08:24.123 12754.314 - 12804.726: 68.5165% ( 97) 00:08:24.123 12804.726 - 12855.138: 69.1361% ( 69) 00:08:24.123 12855.138 - 12905.551: 69.5941% ( 51) 00:08:24.123 12905.551 - 13006.375: 70.6717% ( 120) 00:08:24.123 13006.375 - 13107.200: 71.9648% ( 144) 00:08:24.123 13107.200 - 13208.025: 73.5542% ( 177) 00:08:24.123 13208.025 - 13308.849: 74.9910% ( 160) 00:08:24.123 13308.849 - 13409.674: 76.4009% ( 157) 00:08:24.123 13409.674 - 13510.498: 77.7927% ( 155) 00:08:24.123 13510.498 - 13611.323: 79.2565% ( 163) 00:08:24.123 13611.323 - 13712.148: 80.3341% ( 120) 00:08:24.123 13712.148 - 13812.972: 81.0255% ( 77) 00:08:24.123 13812.972 - 13913.797: 81.6721% ( 72) 00:08:24.123 13913.797 - 14014.622: 82.5880% ( 102) 00:08:24.123 14014.622 - 14115.446: 83.6566% ( 119) 00:08:24.123 14115.446 - 14216.271: 84.5905% ( 104) 00:08:24.123 14216.271 - 14317.095: 85.4436% ( 95) 00:08:24.123 14317.095 - 14417.920: 86.0453% ( 67) 00:08:24.123 14417.920 - 14518.745: 86.8894% ( 94) 00:08:24.123 14518.745 - 14619.569: 87.2935% ( 45) 00:08:24.123 14619.569 - 14720.394: 87.6616% ( 41) 00:08:24.123 14720.394 - 14821.218: 88.1017% ( 49) 00:08:24.123 14821.218 - 14922.043: 88.5147% ( 46) 00:08:24.123 14922.043 - 15022.868: 89.0086% ( 55) 00:08:24.123 15022.868 - 15123.692: 89.5474% ( 60) 00:08:24.123 15123.692 - 15224.517: 90.0772% ( 59) 00:08:24.123 15224.517 - 15325.342: 90.5801% ( 56) 00:08:24.123 15325.342 - 15426.166: 91.0291% ( 50) 00:08:24.123 15426.166 - 15526.991: 91.5499% ( 58) 00:08:24.123 15526.991 - 15627.815: 91.9630% ( 46) 00:08:24.123 15627.815 - 15728.640: 92.4928% ( 59) 00:08:24.123 15728.640 - 15829.465: 93.0316% ( 60) 00:08:24.123 15829.465 - 15930.289: 93.6422% ( 68) 00:08:24.123 15930.289 - 16031.114: 94.1451% ( 56) 00:08:24.123 16031.114 - 16131.938: 94.6121% ( 52) 00:08:24.123 16131.938 - 16232.763: 95.0700% ( 51) 00:08:24.123 16232.763 - 16333.588: 95.4562% ( 43) 00:08:24.123 16333.588 - 16434.412: 95.8064% ( 39) 00:08:24.123 16434.412 - 16535.237: 96.1835% ( 42) 00:08:24.123 16535.237 - 16636.062: 96.6415% ( 51) 00:08:24.123 16636.062 - 16736.886: 97.0815% ( 49) 00:08:24.123 16736.886 - 16837.711: 97.4138% ( 37) 00:08:24.123 16837.711 - 16938.535: 97.7909% ( 42) 00:08:24.123 16938.535 - 17039.360: 98.0334% ( 27) 00:08:24.123 17039.360 - 17140.185: 98.2130% ( 20) 00:08:24.123 17140.185 - 17241.009: 98.4555% ( 27) 00:08:24.123 17241.009 - 17341.834: 98.6081% ( 17) 00:08:24.123 17341.834 - 17442.658: 98.7069% ( 11) 00:08:24.123 17442.658 - 17543.483: 98.7877% ( 9) 00:08:24.123 17543.483 - 17644.308: 98.8416% ( 6) 00:08:24.123 17644.308 - 17745.132: 98.8506% ( 1) 00:08:24.123 19862.449 - 
19963.274: 98.8596% ( 1) 00:08:24.124 20064.098 - 20164.923: 98.9134% ( 6) 00:08:24.124 20164.923 - 20265.748: 98.9763% ( 7) 00:08:24.124 20265.748 - 20366.572: 99.0392% ( 7) 00:08:24.124 20366.572 - 20467.397: 99.1020% ( 7) 00:08:24.124 20467.397 - 20568.222: 99.1828% ( 9) 00:08:24.124 20568.222 - 20669.046: 99.2367% ( 6) 00:08:24.124 20669.046 - 20769.871: 99.2906% ( 6) 00:08:24.124 20769.871 - 20870.695: 99.3445% ( 6) 00:08:24.124 20870.695 - 20971.520: 99.3983% ( 6) 00:08:24.124 20971.520 - 21072.345: 99.4253% ( 3) 00:08:24.124 25206.154 - 25306.978: 99.4343% ( 1) 00:08:24.124 25306.978 - 25407.803: 99.4432% ( 1) 00:08:24.124 25407.803 - 25508.628: 99.4612% ( 2) 00:08:24.124 25508.628 - 25609.452: 99.6408% ( 20) 00:08:24.124 25609.452 - 25710.277: 99.7396% ( 11) 00:08:24.124 25710.277 - 25811.102: 99.7845% ( 5) 00:08:24.124 25811.102 - 26012.751: 99.8653% ( 9) 00:08:24.124 26012.751 - 26214.400: 99.9641% ( 11) 00:08:24.124 26214.400 - 26416.049: 100.0000% ( 4) 00:08:24.124 00:08:24.124 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.124 ============================================================================== 00:08:24.124 Range in us Cumulative IO count 00:08:24.124 5293.292 - 5318.498: 0.0090% ( 1) 00:08:24.124 5343.705 - 5368.911: 0.0180% ( 1) 00:08:24.124 5419.323 - 5444.529: 0.0269% ( 1) 00:08:24.124 5444.529 - 5469.735: 0.0539% ( 3) 00:08:24.124 5469.735 - 5494.942: 0.0808% ( 3) 00:08:24.124 5494.942 - 5520.148: 0.1078% ( 3) 00:08:24.124 5520.148 - 5545.354: 0.1796% ( 8) 00:08:24.124 5545.354 - 5570.560: 0.2514% ( 8) 00:08:24.124 5570.560 - 5595.766: 0.3861% ( 15) 00:08:24.124 5595.766 - 5620.972: 0.5208% ( 15) 00:08:24.124 5620.972 - 5646.178: 0.7992% ( 31) 00:08:24.124 5646.178 - 5671.385: 1.0417% ( 27) 00:08:24.124 5671.385 - 5696.591: 1.4727% ( 48) 00:08:24.124 5696.591 - 5721.797: 1.8319% ( 40) 00:08:24.124 5721.797 - 5747.003: 2.0474% ( 24) 00:08:24.124 5747.003 - 5772.209: 2.2899% ( 27) 00:08:24.124 5772.209 - 5797.415: 2.4515% ( 18) 00:08:24.124 5797.415 - 5822.622: 2.7209% ( 30) 00:08:24.124 5822.622 - 5847.828: 3.0442% ( 36) 00:08:24.124 5847.828 - 5873.034: 3.1609% ( 13) 00:08:24.124 5873.034 - 5898.240: 3.2597% ( 11) 00:08:24.124 5898.240 - 5923.446: 3.3405% ( 9) 00:08:24.124 5923.446 - 5948.652: 3.4303% ( 10) 00:08:24.124 5948.652 - 5973.858: 3.5381% ( 12) 00:08:24.124 5973.858 - 5999.065: 3.6548% ( 13) 00:08:24.124 5999.065 - 6024.271: 3.8524% ( 22) 00:08:24.124 6024.271 - 6049.477: 4.2205% ( 41) 00:08:24.124 6049.477 - 6074.683: 4.4091% ( 21) 00:08:24.124 6074.683 - 6099.889: 4.7773% ( 41) 00:08:24.124 6099.889 - 6125.095: 4.9569% ( 20) 00:08:24.124 6125.095 - 6150.302: 5.2353% ( 31) 00:08:24.124 6150.302 - 6175.508: 5.4688% ( 26) 00:08:24.124 6175.508 - 6200.714: 5.9177% ( 50) 00:08:24.124 6200.714 - 6225.920: 6.2051% ( 32) 00:08:24.124 6225.920 - 6251.126: 6.4476% ( 27) 00:08:24.124 6251.126 - 6276.332: 6.8696% ( 47) 00:08:24.124 6276.332 - 6301.538: 7.3276% ( 51) 00:08:24.124 6301.538 - 6326.745: 7.5521% ( 25) 00:08:24.124 6326.745 - 6351.951: 7.7945% ( 27) 00:08:24.124 6351.951 - 6377.157: 8.3872% ( 66) 00:08:24.124 6377.157 - 6402.363: 8.6835% ( 33) 00:08:24.124 6402.363 - 6427.569: 8.9709% ( 32) 00:08:24.124 6427.569 - 6452.775: 9.1505% ( 20) 00:08:24.124 6452.775 - 6503.188: 9.4738% ( 36) 00:08:24.124 6503.188 - 6553.600: 9.6534% ( 20) 00:08:24.124 6553.600 - 6604.012: 9.7611% ( 12) 00:08:24.124 6604.012 - 6654.425: 9.8958% ( 15) 00:08:24.124 6654.425 - 6704.837: 10.0305% ( 15) 00:08:24.124 6704.837 - 6755.249: 10.3448% ( 35) 
00:08:24.124 [latency histogram bucket lines elided: cumulative IO counts per range in us, 6755.249 us through 26617.698 us, ending at 100.0000% ( 1)]
00:08:24.125
00:08:24.125 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:24.125 ==============================================================================
00:08:24.125        Range in us     Cumulative    IO count
00:08:24.126 [latency histogram bucket lines elided: cumulative IO counts per range in us, 4637.932 us through 26617.698 us, ending at 100.0000% ( 7)]
00:08:24.126
00:08:24.126 10:14:03 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:24.126
00:08:24.126 real 0m2.415s
00:08:24.126 user 0m2.141s
00:08:24.126 sys 0m0.166s
00:08:24.126 10:14:03 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.126 ************************************
00:08:24.126 END TEST nvme_perf
00:08:24.126 ************************************
00:08:24.126 10:14:03 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
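The nvme_perf run above reports per-namespace latency histograms from spdk_nvme_perf. As a reference for reproducing a comparable single run by hand, here is a minimal bash sketch; the binary path and every flag are taken from spdk_nvme_perf invocations that appear later in this log, and any other workload choice is an assumption:

  # Sketch only: one spdk_nvme_perf run (path and flags as seen in this log).
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # 4 KiB reads, queue depth 16, 3 seconds, core mask 0x2, shared memory id 0.
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2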
00:08:24.126 10:14:03 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:24.126 10:14:03 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:24.126 10:14:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.126 10:14:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.126 ************************************
00:08:24.126 START TEST nvme_hello_world
00:08:24.126 ************************************
00:08:24.126 10:14:03 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:24.126 Initializing NVMe Controllers
00:08:24.126 Attached to 0000:00:13.0
00:08:24.126 Namespace ID: 1 size: 1GB
00:08:24.126 Attached to 0000:00:10.0
00:08:24.126 Namespace ID: 1 size: 6GB
00:08:24.126 Attached to 0000:00:11.0
00:08:24.126 Namespace ID: 1 size: 5GB
00:08:24.126 Attached to 0000:00:12.0
00:08:24.127 Namespace ID: 1 size: 4GB
00:08:24.127 Namespace ID: 2 size: 4GB
00:08:24.127 Namespace ID: 3 size: 4GB
00:08:24.127 Initialization complete.
00:08:24.127 INFO: using host memory buffer for IO
00:08:24.127 Hello world!
00:08:24.127 INFO: using host memory buffer for IO
00:08:24.127 Hello world!
00:08:24.127 INFO: using host memory buffer for IO
00:08:24.127 Hello world!
00:08:24.127 INFO: using host memory buffer for IO
00:08:24.127 Hello world!
00:08:24.127 INFO: using host memory buffer for IO
00:08:24.127 Hello world!
00:08:24.127 INFO: using host memory buffer for IO
00:08:24.127 Hello world!
00:08:24.127
00:08:24.127 real 0m0.186s
00:08:24.127 user 0m0.068s
00:08:24.127 sys 0m0.076s
00:08:24.127 10:14:03 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.127 10:14:03 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:24.127 ************************************
00:08:24.127 END TEST nvme_hello_world
00:08:24.127 ************************************
00:08:24.127 10:14:03 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:24.127 10:14:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.127 10:14:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.127 10:14:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.127 ************************************
00:08:24.127 START TEST nvme_sgl
00:08:24.127 ************************************
00:08:24.127 10:14:03 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:24.388 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:24.388 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:24.388 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:24.388 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:24.388 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:24.388 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:24.388 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:24.388 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:24.388 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:24.388 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:24.388 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:24.388 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:24.388 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:24.388 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:24.388 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:24.388 NVMe Readv/Writev Request test
00:08:24.388 Attached to 0000:00:13.0
00:08:24.388 Attached to 0000:00:10.0
00:08:24.388 Attached to 0000:00:11.0
00:08:24.388 Attached to 0000:00:12.0
00:08:24.388 0000:00:10.0: build_io_request_2 test passed
00:08:24.388 0000:00:10.0: build_io_request_4 test passed
00:08:24.388 0000:00:10.0: build_io_request_5 test passed
00:08:24.388 0000:00:10.0: build_io_request_6 test passed
00:08:24.388 0000:00:10.0: build_io_request_7 test passed
00:08:24.388 0000:00:10.0: build_io_request_10 test passed
00:08:24.388 0000:00:11.0: build_io_request_2 test passed
00:08:24.388 0000:00:11.0: build_io_request_4 test passed
00:08:24.388 0000:00:11.0: build_io_request_5 test passed
00:08:24.388 0000:00:11.0: build_io_request_6 test passed
00:08:24.388 0000:00:11.0: build_io_request_7 test passed
00:08:24.388 0000:00:11.0: build_io_request_10 test passed
00:08:24.388 Cleaning up...
00:08:24.388
00:08:24.388 real 0m0.246s
00:08:24.388 user 0m0.130s
00:08:24.388 sys 0m0.071s
00:08:24.388 10:14:03 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.388 10:14:03 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:24.388 ************************************
00:08:24.388 END TEST nvme_sgl
00:08:24.388 ************************************
00:08:24.388 10:14:03 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:24.388 10:14:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.388 10:14:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.388 10:14:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.388 ************************************
00:08:24.388 START TEST nvme_e2edp
00:08:24.388 ************************************
00:08:24.388 10:14:03 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:24.649 NVMe Write/Read with End-to-End data protection test
00:08:24.649 Attached to 0000:00:13.0
00:08:24.649 Attached to 0000:00:10.0
00:08:24.649 Attached to 0000:00:11.0
00:08:24.649 Attached to 0000:00:12.0
00:08:24.649 Cleaning up...
00:08:24.649
00:08:24.649 real 0m0.181s
00:08:24.649 user 0m0.063s
00:08:24.649 sys 0m0.072s
00:08:24.649 10:14:03 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.649 10:14:03 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:24.649 ************************************
00:08:24.649 END TEST nvme_e2edp
00:08:24.649 ************************************
00:08:24.649 10:14:04 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:24.649 10:14:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.649 10:14:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.649 10:14:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.649 ************************************
00:08:24.649 START TEST nvme_reserve
00:08:24.649 ************************************
00:08:24.649 10:14:04 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:24.911 =====================================================
00:08:24.911 NVMe Controller at PCI bus 0, device 19, function 0
00:08:24.911 =====================================================
00:08:24.911 Reservations: Not Supported
00:08:24.911 =====================================================
00:08:24.911 NVMe Controller at PCI bus 0, device 16, function 0
00:08:24.911 =====================================================
00:08:24.911 Reservations: Not Supported
00:08:24.911 =====================================================
00:08:24.911 NVMe Controller at PCI bus 0, device 17, function 0
00:08:24.911 =====================================================
00:08:24.911 Reservations: Not Supported
00:08:24.911 =====================================================
00:08:24.911 NVMe Controller at PCI bus 0, device 18, function 0
00:08:24.911 =====================================================
00:08:24.911 Reservations: Not Supported
00:08:24.911 Reservation test passed
00:08:24.911 ************************************
00:08:24.911 END TEST nvme_reserve
00:08:24.911 ************************************
00:08:24.911
00:08:24.911 real 0m0.179s
00:08:24.911 user 0m0.055s
00:08:24.911 sys 0m0.082s
00:08:24.911 10:14:04 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.911 10:14:04 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:24.911 10:14:04 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:24.911 10:14:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.911 10:14:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.911 10:14:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.911 ************************************
00:08:24.911 START TEST nvme_err_injection
00:08:24.911 ************************************
00:08:24.911 10:14:04 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:25.171 NVMe Error Injection test
00:08:25.171 Attached to 0000:00:13.0
00:08:25.171 Attached to 0000:00:10.0
00:08:25.171 Attached to 0000:00:11.0
00:08:25.171 Attached to 0000:00:12.0
00:08:25.171 0000:00:11.0: get features failed as expected
00:08:25.171 0000:00:12.0: get features failed as expected
00:08:25.171 0000:00:13.0: get features failed as expected
00:08:25.171 0000:00:10.0: get features failed as expected
00:08:25.171 0000:00:13.0: get features successfully as expected
00:08:25.171 0000:00:10.0: get features successfully as expected
00:08:25.171 0000:00:11.0: get features successfully as expected
00:08:25.171 0000:00:12.0: get features successfully as expected
00:08:25.171 0000:00:12.0: read failed as expected
00:08:25.171 0000:00:13.0: read failed as expected
00:08:25.171 0000:00:10.0: read failed as expected
00:08:25.171 0000:00:11.0: read failed as expected
00:08:25.171 0000:00:12.0: read successfully as expected
00:08:25.171 0000:00:13.0: read successfully as expected
00:08:25.171 0000:00:10.0: read successfully as expected
00:08:25.171 0000:00:11.0: read successfully as expected
00:08:25.171 Cleaning up...
00:08:25.171
00:08:25.171 real 0m0.209s
00:08:25.171 user 0m0.078s
00:08:25.171 sys 0m0.082s
00:08:25.171 10:14:04 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.171 10:14:04 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:25.171 ************************************
00:08:25.171 END TEST nvme_err_injection
00:08:25.171 ************************************
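Each test above is wrapped by run_test from autotest_common.sh, which prints the START/END banners and the real/user/sys timing lines visible throughout this log. The following bash sketch illustrates that observable pattern only; it is not SPDK's actual implementation, and the function and invocation names are hypothetical:

  # Illustrative sketch of a run_test-style wrapper (not the SPDK original).
  run_test_sketch() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # emits real/user/sys lines like those in this log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }
  # Hypothetical usage:
  # run_test_sketch nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000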
00:08:25.171 10:14:04 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:25.171 10:14:04 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:08:25.171 10:14:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:25.171 10:14:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:25.171 ************************************
00:08:25.171 START TEST nvme_overhead
00:08:25.171 ************************************
00:08:25.171 10:14:04 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:26.550 Initializing NVMe Controllers
00:08:26.550 Attached to 0000:00:13.0
00:08:26.550 Attached to 0000:00:10.0
00:08:26.550 Attached to 0000:00:11.0
00:08:26.550 Attached to 0000:00:12.0
00:08:26.550 Initialization complete. Launching workers.
00:08:26.550 submit (in ns) avg, min, max = 11561.4, 10379.2, 288401.5
00:08:26.550 complete (in ns) avg, min, max = 7765.3, 7320.0, 812496.9
00:08:26.550
00:08:26.550 Submit histogram
00:08:26.550 ================
00:08:26.550        Range in us     Cumulative     Count
00:08:26.551 [submit histogram bucket lines elided: cumulative counts per range in us, 10.338 us through 289.871 us, ending at 100.0000% ( 1)]
00:08:26.551
00:08:26.551 Complete histogram
00:08:26.551 ==================
00:08:26.551        Range in us     Cumulative     Count
00:08:26.552 [complete histogram bucket lines elided: cumulative counts per range in us, 7.286 us through 812.898 us, ending at 100.0000% ( 1)]
00:08:26.552
00:08:26.552 real 0m1.171s
00:08:26.552 user 0m1.056s
00:08:26.552 sys 0m0.075s
00:08:26.552 10:14:05 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:26.552 10:14:05 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:26.552 ************************************
00:08:26.552 END TEST nvme_overhead
00:08:26.552 ************************************
00:08:26.552 10:14:05 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:26.552 10:14:05 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:26.552 10:14:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:26.552 10:14:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:26.552 ************************************
00:08:26.552 START TEST nvme_arbitration
00:08:26.552 ************************************
00:08:26.552 10:14:05 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:29.838 Initializing NVMe Controllers
00:08:29.838 Attached to 0000:00:13.0
00:08:29.838 Attached to 0000:00:10.0
00:08:29.838 Attached to 0000:00:11.0
00:08:29.838 Attached to 0000:00:12.0
00:08:29.838 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:08:29.838 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:08:29.838 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:08:29.838 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:29.838 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:29.838 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:29.838 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:29.838 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:08:29.838 Initialization complete. Launching workers.
00:08:29.838 Starting thread on core 1 with urgent priority queue
00:08:29.838 Starting thread on core 2 with urgent priority queue
00:08:29.838 Starting thread on core 3 with urgent priority queue
00:08:29.838 Starting thread on core 0 with urgent priority queue
00:08:29.838 QEMU NVMe Ctrl (12343 ) core 0: 6912.00 IO/s 14.47 secs/100000 ios
00:08:29.838 QEMU NVMe Ctrl (12342 ) core 0: 6912.00 IO/s 14.47 secs/100000 ios
00:08:29.838 QEMU NVMe Ctrl (12340 ) core 1: 6976.00 IO/s 14.33 secs/100000 ios
00:08:29.838 QEMU NVMe Ctrl (12342 ) core 1: 6976.00 IO/s 14.33 secs/100000 ios
00:08:29.838 QEMU NVMe Ctrl (12341 ) core 2: 6613.33 IO/s 15.12 secs/100000 ios
00:08:29.838 QEMU NVMe Ctrl (12342 ) core 3: 6528.00 IO/s 15.32 secs/100000 ios
00:08:29.838 ========================================================
00:08:29.838
00:08:29.838 ************************************
00:08:29.838 END TEST nvme_arbitration
00:08:29.838 ************************************
00:08:29.838
00:08:29.838 real 0m3.187s
00:08:29.838 user 0m9.032s
00:08:29.838 sys 0m0.070s
00:08:29.838 10:14:08 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:29.838 10:14:08 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
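The arbitration example above was launched as arbitration -t 3 -i 0 and echoed the full configuration it expanded that to. A bash sketch of invoking it directly with that same expanded configuration, assuming the example binary exists at the path this log uses:

  ARB=/home/vagrant/spdk_repo/spdk/build/examples/arbitration
  # Same settings the log reports: queue depth 64, 50/50 randrw, 3-second run,
  # four cores (-c 0xf), shared memory id 0.
  "$ARB" -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0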
00:08:29.838 10:14:08 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:29.838 10:14:08 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:29.838 10:14:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:29.838 10:14:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:29.838 ************************************
00:08:29.838 START TEST nvme_single_aen
00:08:29.838 ************************************
00:08:29.838 10:14:08 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:29.838 Asynchronous Event Request test
00:08:29.838 Attached to 0000:00:13.0
00:08:29.838 Attached to 0000:00:10.0
00:08:29.838 Attached to 0000:00:11.0
00:08:29.838 Attached to 0000:00:12.0
00:08:29.838 Reset controller to setup AER completions for this process
00:08:29.838 Registering asynchronous event callbacks...
00:08:29.838 Getting orig temperature thresholds of all controllers
00:08:29.838 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:29.838 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:29.838 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:29.838 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:29.838 Setting all controllers temperature threshold low to trigger AER
00:08:29.838 Waiting for all controllers temperature threshold to be set lower
00:08:29.838 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:29.838 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:29.838 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:29.838 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:29.838 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:29.838 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:29.838 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:29.838 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:29.838 Waiting for all controllers to trigger AER and reset threshold
00:08:29.838 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:29.838 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:29.838 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:29.838 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:29.838 Cleaning up...
00:08:29.838
00:08:29.838 real 0m0.198s
00:08:29.838 user 0m0.062s
00:08:29.838 sys 0m0.090s
00:08:29.838 10:14:09 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:29.838 ************************************
00:08:29.838 END TEST nvme_single_aen
00:08:29.838 ************************************
00:08:29.838 10:14:09 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:08:29.838 10:14:09 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:08:29.838 10:14:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:29.838 10:14:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:29.838 10:14:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:29.838 ************************************
00:08:29.838 START TEST nvme_doorbell_aers
00:08:29.838 ************************************
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:29.838 10:14:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:30.099 [2024-11-29 10:14:09.492220] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request.
00:08:40.089 Executing: test_write_invalid_db
00:08:40.089 Waiting for AER completion...
00:08:40.089 Failure: test_write_invalid_db
00:08:40.089
00:08:40.089 Executing: test_invalid_db_write_overflow_sq
00:08:40.089 Waiting for AER completion...
00:08:40.089 Failure: test_invalid_db_write_overflow_sq
00:08:40.089
00:08:40.089 Executing: test_invalid_db_write_overflow_cq
00:08:40.089 Waiting for AER completion...
00:08:40.089 Failure: test_invalid_db_write_overflow_cq
00:08:40.089
00:08:40.089 10:14:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:40.089 10:14:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:40.089 [2024-11-29 10:14:19.508712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request.
00:08:50.056 Executing: test_write_invalid_db
00:08:50.056 Waiting for AER completion...
00:08:50.056 Failure: test_write_invalid_db
00:08:50.056
00:08:50.056 Executing: test_invalid_db_write_overflow_sq
00:08:50.056 Waiting for AER completion...
00:08:50.056 Failure: test_invalid_db_write_overflow_sq
00:08:50.056
00:08:50.056 Executing: test_invalid_db_write_overflow_cq
00:08:50.056 Waiting for AER completion...
00:08:50.056 Failure: test_invalid_db_write_overflow_cq
00:08:50.056
00:08:50.056 10:14:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:50.056 10:14:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:50.313 [2024-11-29 10:14:29.524038] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request.
00:09:00.281 Executing: test_write_invalid_db
00:09:00.281 Waiting for AER completion...
00:09:00.281 Failure: test_write_invalid_db
00:09:00.281
00:09:00.281 Executing: test_invalid_db_write_overflow_sq
00:09:00.281 Waiting for AER completion...
00:09:00.281 Failure: test_invalid_db_write_overflow_sq
00:09:00.281
00:09:00.281 Executing: test_invalid_db_write_overflow_cq
00:09:00.281 Waiting for AER completion...
00:09:00.281 Failure: test_invalid_db_write_overflow_cq
00:09:00.281
00:09:00.281 10:14:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:00.281 10:14:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:09:00.281 [2024-11-29 10:14:39.563078] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request.
00:09:10.253 Executing: test_write_invalid_db
00:09:10.253 Waiting for AER completion...
00:09:10.253 Failure: test_write_invalid_db
00:09:10.253
00:09:10.253 Executing: test_invalid_db_write_overflow_sq
00:09:10.253 Waiting for AER completion...
00:09:10.253 Failure: test_invalid_db_write_overflow_sq
00:09:10.253
00:09:10.253 Executing: test_invalid_db_write_overflow_cq
00:09:10.253 Waiting for AER completion...
00:09:10.253 Failure: test_invalid_db_write_overflow_cq
00:09:10.253
00:09:10.253
00:09:10.253 real 0m40.189s
00:09:10.253 user 0m34.255s
00:09:10.253 sys 0m5.544s
00:09:10.253 ************************************
00:09:10.253 END TEST nvme_doorbell_aers
00:09:10.253 ************************************
00:09:10.253 10:14:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:10.253 10:14:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
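The xtraced shell above enumerates NVMe controllers and runs doorbell_aers against each with a 10-second cap. Condensed into one readable bash sketch (the paths, jq filter, and flags are exactly those traced in the log; only the consolidated form is an assumption):

  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate NVMe PCI addresses the same way the traced get_nvme_bdfs did.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    # Cap each controller at 10 seconds, keeping the test binary's exit status.
    timeout --preserve-status 10 \
      "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done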
Dropping the request. 00:09:10.253 [2024-11-29 10:14:49.591888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request. 00:09:10.253 [2024-11-29 10:14:49.591895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request. 00:09:10.253 [2024-11-29 10:14:49.592744] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request. 00:09:10.253 [2024-11-29 10:14:49.592764] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request. 00:09:10.253 [2024-11-29 10:14:49.592771] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74703) is not found. Dropping the request. 00:09:10.253 Child process pid: 75229 00:09:10.512 [Child] Asynchronous Event Request test 00:09:10.512 [Child] Attached to 0000:00:13.0 00:09:10.512 [Child] Attached to 0000:00:10.0 00:09:10.512 [Child] Attached to 0000:00:11.0 00:09:10.512 [Child] Attached to 0000:00:12.0 00:09:10.512 [Child] Registering asynchronous event callbacks... 00:09:10.512 [Child] Getting orig temperature thresholds of all controllers 00:09:10.512 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:10.512 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 [Child] Cleaning up... 00:09:10.512 Asynchronous Event Request test 00:09:10.512 Attached to 0000:00:13.0 00:09:10.512 Attached to 0000:00:10.0 00:09:10.512 Attached to 0000:00:11.0 00:09:10.512 Attached to 0000:00:12.0 00:09:10.512 Reset controller to setup AER completions for this process 00:09:10.512 Registering asynchronous event callbacks... 
00:09:10.512 Getting orig temperature thresholds of all controllers 00:09:10.512 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.512 Setting all controllers temperature threshold low to trigger AER 00:09:10.512 Waiting for all controllers temperature threshold to be set lower 00:09:10.512 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:10.512 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:10.512 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:10.512 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.512 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:10.512 Waiting for all controllers to trigger AER and reset threshold 00:09:10.512 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 Cleaning up... 00:09:10.512 00:09:10.512 real 0m0.372s 00:09:10.512 user 0m0.134s 00:09:10.512 sys 0m0.137s 00:09:10.512 10:14:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.512 10:14:49 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:10.512 ************************************ 00:09:10.512 END TEST nvme_multi_aen 00:09:10.512 ************************************ 00:09:10.512 10:14:49 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:10.512 10:14:49 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:10.512 10:14:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.512 10:14:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.512 ************************************ 00:09:10.512 START TEST nvme_startup 00:09:10.512 ************************************ 00:09:10.512 10:14:49 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:10.770 Initializing NVMe Controllers 00:09:10.770 Attached to 0000:00:13.0 00:09:10.770 Attached to 0000:00:10.0 00:09:10.770 Attached to 0000:00:11.0 00:09:10.770 Attached to 0000:00:12.0 00:09:10.770 Initialization complete. 00:09:10.770 Time used:107370.281 (us). 
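Both passes above (the forked child and then the parent) exercise the same asynchronous-event flow: each controller's temperature threshold is lowered below its current 323 Kelvin reading, the controller fires an AER pointing at log page 2 (SMART / Health Information), and the aer_cb handler restores the original 343 Kelvin threshold. A rough sketch of the same idea against a kernel-attached device, using nvme-cli instead of the SPDK aer tool (the device path and raw values here are assumptions, not taken from this run):

  nvme get-feature /dev/nvme0 -f 0x04             # feature 0x04 = Temperature Threshold, e.g. 343 K
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140   # 0x140 = 320 K, below the 323 K composite temperature
  # the controller now raises an Asynchronous Event; after reading log page 2, restore it:
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157   # 0x157 = 343 K, the original threshold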
00:09:10.770 00:09:10.770 real 0m0.152s 00:09:10.770 user 0m0.059s 00:09:10.770 sys 0m0.059s 00:09:10.770 10:14:50 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.770 10:14:50 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:10.770 ************************************ 00:09:10.770 END TEST nvme_startup 00:09:10.770 ************************************ 00:09:10.770 10:14:50 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:10.770 10:14:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.770 10:14:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.770 10:14:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.771 ************************************ 00:09:10.771 START TEST nvme_multi_secondary 00:09:10.771 ************************************ 00:09:10.771 10:14:50 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:10.771 10:14:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75274 00:09:10.771 10:14:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:10.771 10:14:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75275 00:09:10.771 10:14:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:10.771 10:14:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:14.053 Initializing NVMe Controllers 00:09:14.053 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.053 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.053 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.053 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.053 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:14.053 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:14.053 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:14.053 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:14.053 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:14.053 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:14.053 Initialization complete. Launching workers. 
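nvme_multi_secondary exercises SPDK's multi-process mode: all three spdk_nvme_perf instances pass the same shared-memory id (-i 0), so one attaches to the controllers as the primary process and the other two join as secondaries, while the disjoint core masks (-c 0x1, 0x2, 0x4) keep their reactors on separate cores. Reduced to its essentials, the launch pattern from the lines above is (assuming spdk_nvme_perf is on PATH):

  spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longest-running instance, core 0
  spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1
  spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # core 2
  wait                                                      # like the wait calls at nvme.sh@56/@57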
00:09:14.053 ======================================================== 00:09:14.053 Latency(us) 00:09:14.053 Device Information : IOPS MiB/s Average min max 00:09:14.053 PCIE (0000:00:13.0) NSID 1 from core 1: 7904.66 30.88 2023.66 995.64 5446.24 00:09:14.053 PCIE (0000:00:10.0) NSID 1 from core 1: 7904.66 30.88 2022.83 1008.43 5944.15 00:09:14.053 PCIE (0000:00:11.0) NSID 1 from core 1: 7904.66 30.88 2023.88 1008.12 5249.88 00:09:14.053 PCIE (0000:00:12.0) NSID 1 from core 1: 7904.66 30.88 2023.99 1012.88 5389.91 00:09:14.053 PCIE (0000:00:12.0) NSID 2 from core 1: 7904.66 30.88 2024.03 996.53 5225.67 00:09:14.053 PCIE (0000:00:12.0) NSID 3 from core 1: 7904.66 30.88 2024.10 1039.00 5196.11 00:09:14.053 ======================================================== 00:09:14.053 Total : 47427.95 185.27 2023.75 995.64 5944.15 00:09:14.053 00:09:14.053 Initializing NVMe Controllers 00:09:14.053 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.053 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.053 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.053 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.053 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:14.053 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:14.053 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:14.053 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:14.053 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:14.053 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:14.053 Initialization complete. Launching workers. 00:09:14.053 ======================================================== 00:09:14.053 Latency(us) 00:09:14.053 Device Information : IOPS MiB/s Average min max 00:09:14.053 PCIE (0000:00:13.0) NSID 1 from core 2: 3415.12 13.34 4684.58 1037.08 13398.10 00:09:14.053 PCIE (0000:00:10.0) NSID 1 from core 2: 3415.12 13.34 4682.81 1152.30 13197.97 00:09:14.053 PCIE (0000:00:11.0) NSID 1 from core 2: 3415.12 13.34 4684.75 1141.03 13658.67 00:09:14.053 PCIE (0000:00:12.0) NSID 1 from core 2: 3415.12 13.34 4684.76 1130.68 13487.04 00:09:14.053 PCIE (0000:00:12.0) NSID 2 from core 2: 3415.12 13.34 4684.47 1021.66 12962.81 00:09:14.053 PCIE (0000:00:12.0) NSID 3 from core 2: 3415.12 13.34 4684.82 902.21 13706.59 00:09:14.053 ======================================================== 00:09:14.053 Total : 20490.72 80.04 4684.36 902.21 13706.59 00:09:14.053 00:09:14.053 10:14:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75274 00:09:15.954 Initializing NVMe Controllers 00:09:15.954 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.954 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.954 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.954 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.954 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:15.954 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:15.954 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:15.954 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:15.954 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:15.954 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:15.954 Initialization complete. Launching workers. 
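The average latencies in the two tables above are consistent with Little's law at the fixed queue depth of 16 (-q 16): per-namespace latency ≈ qd / IOPS. As a worked check against the table values:

  core 1: 16 / 7904.66 IO/s ≈ 2.024e-3 s ≈ 2024 µs  (table average ≈ 2023.75 µs)
  core 2: 16 / 3415.12 IO/s ≈ 4.685e-3 s ≈ 4685 µs  (table average ≈ 4684.36 µs)

Why one secondary saw roughly half the other's throughput cannot be determined from the log alone; on a QEMU guest like this one, host-side CPU scheduling is a plausible factor.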
00:09:15.954 ======================================================== 00:09:15.954 Latency(us) 00:09:15.954 Device Information : IOPS MiB/s Average min max 00:09:15.954 PCIE (0000:00:13.0) NSID 1 from core 0: 11146.68 43.54 1435.02 664.96 5772.34 00:09:15.954 PCIE (0000:00:10.0) NSID 1 from core 0: 11134.08 43.49 1435.79 660.56 6116.92 00:09:15.954 PCIE (0000:00:11.0) NSID 1 from core 0: 11131.68 43.48 1436.96 657.41 7629.82 00:09:15.954 PCIE (0000:00:12.0) NSID 1 from core 0: 11146.28 43.54 1435.08 646.12 6038.06 00:09:15.954 PCIE (0000:00:12.0) NSID 2 from core 0: 11146.88 43.54 1434.99 654.65 5872.96 00:09:15.954 PCIE (0000:00:12.0) NSID 3 from core 0: 11142.88 43.53 1435.51 643.33 6036.13 00:09:15.954 ======================================================== 00:09:15.954 Total : 66848.46 261.13 1435.56 643.33 7629.82 00:09:15.954 00:09:15.954 10:14:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75275 00:09:15.954 10:14:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75344 00:09:15.954 10:14:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:15.954 10:14:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75345 00:09:15.954 10:14:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:15.954 10:14:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:19.234 Initializing NVMe Controllers 00:09:19.234 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.234 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.234 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.234 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.234 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:19.234 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:19.234 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:19.234 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:19.234 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:19.234 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:19.234 Initialization complete. Launching workers. 
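A quick unit check on the core-0 totals above: the tables report MiB/s, and the IOPS and bandwidth columns agree:

  66848.46 IO/s × 4096 B ≈ 273,811,292 B/s ≈ 273.8 MB/s
  273,811,292 B/s ÷ 2^20 ≈ 261.13 MiB/s   (matches the Total row)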
00:09:19.234 ======================================================== 00:09:19.234 Latency(us) 00:09:19.234 Device Information : IOPS MiB/s Average min max 00:09:19.234 PCIE (0000:00:13.0) NSID 1 from core 1: 8170.19 31.91 1957.79 732.60 5336.24 00:09:19.234 PCIE (0000:00:10.0) NSID 1 from core 1: 8170.19 31.91 1956.92 709.34 5334.47 00:09:19.234 PCIE (0000:00:11.0) NSID 1 from core 1: 8170.19 31.91 1957.90 733.89 5335.53 00:09:19.234 PCIE (0000:00:12.0) NSID 1 from core 1: 8170.19 31.91 1958.03 737.87 5644.60 00:09:19.234 PCIE (0000:00:12.0) NSID 2 from core 1: 8170.19 31.91 1958.05 719.96 5535.84 00:09:19.234 PCIE (0000:00:12.0) NSID 3 from core 1: 8170.19 31.91 1958.05 722.30 5976.81 00:09:19.234 ======================================================== 00:09:19.234 Total : 49021.13 191.49 1957.79 709.34 5976.81 00:09:19.234 00:09:19.234 Initializing NVMe Controllers 00:09:19.234 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.234 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.234 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.234 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.234 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:19.234 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:19.234 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:19.234 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:19.234 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:19.234 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:19.234 Initialization complete. Launching workers. 00:09:19.234 ======================================================== 00:09:19.234 Latency(us) 00:09:19.234 Device Information : IOPS MiB/s Average min max 00:09:19.234 PCIE (0000:00:13.0) NSID 1 from core 0: 8196.12 32.02 1951.68 718.29 6052.00 00:09:19.234 PCIE (0000:00:10.0) NSID 1 from core 0: 8196.12 32.02 1950.73 681.75 5952.66 00:09:19.234 PCIE (0000:00:11.0) NSID 1 from core 0: 8196.12 32.02 1951.63 691.52 5740.55 00:09:19.234 PCIE (0000:00:12.0) NSID 1 from core 0: 8196.12 32.02 1951.67 709.99 6181.24 00:09:19.234 PCIE (0000:00:12.0) NSID 2 from core 0: 8196.12 32.02 1951.64 718.37 6387.57 00:09:19.234 PCIE (0000:00:12.0) NSID 3 from core 0: 8196.12 32.02 1951.61 717.08 6445.91 00:09:19.234 ======================================================== 00:09:19.234 Total : 49176.74 192.10 1951.49 681.75 6445.91 00:09:19.234 00:09:21.132 Initializing NVMe Controllers 00:09:21.132 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.132 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.132 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.132 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.132 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:21.132 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:21.132 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:21.132 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:21.132 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:21.132 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:21.132 Initialization complete. Launching workers. 
00:09:21.132 ======================================================== 00:09:21.132 Latency(us) 00:09:21.132 Device Information : IOPS MiB/s Average min max 00:09:21.132 PCIE (0000:00:13.0) NSID 1 from core 2: 4810.63 18.79 3325.55 743.36 12169.40 00:09:21.132 PCIE (0000:00:10.0) NSID 1 from core 2: 4810.63 18.79 3323.74 726.01 13089.60 00:09:21.132 PCIE (0000:00:11.0) NSID 1 from core 2: 4810.63 18.79 3323.81 673.97 12908.09 00:09:21.132 PCIE (0000:00:12.0) NSID 1 from core 2: 4810.63 18.79 3322.73 738.32 12979.95 00:09:21.132 PCIE (0000:00:12.0) NSID 2 from core 2: 4810.63 18.79 3322.64 739.31 12738.95 00:09:21.132 PCIE (0000:00:12.0) NSID 3 from core 2: 4810.63 18.79 3322.39 601.78 13433.93 00:09:21.132 ======================================================== 00:09:21.132 Total : 28863.77 112.75 3323.48 601.78 13433.93 00:09:21.132 00:09:21.393 ************************************ 00:09:21.393 END TEST nvme_multi_secondary 00:09:21.393 ************************************ 00:09:21.393 10:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75344 00:09:21.393 10:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75345 00:09:21.393 00:09:21.393 real 0m10.565s 00:09:21.393 user 0m18.289s 00:09:21.393 sys 0m0.517s 00:09:21.393 10:15:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.393 10:15:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:21.393 10:15:00 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:21.393 10:15:00 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74317 ]] 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1094 -- # kill 74317 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1095 -- # wait 74317 00:09:21.393 [2024-11-29 10:15:00.652495] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.653160] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.653213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.653240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.653960] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.654055] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.654078] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.654101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.654859] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 
00:09:21.393 [2024-11-29 10:15:00.654920] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.654941] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.654967] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.655702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.655970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.655999] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 [2024-11-29 10:15:00.656022] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75228) is not found. Dropping the request. 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:21.393 10:15:00 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.393 10:15:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.393 ************************************ 00:09:21.393 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:21.393 ************************************ 00:09:21.393 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:21.393 * Looking for test storage... 
00:09:21.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.393 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.393 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.393 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.654 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.654 --rc genhtml_branch_coverage=1 00:09:21.654 --rc genhtml_function_coverage=1 00:09:21.654 --rc genhtml_legend=1 00:09:21.654 --rc geninfo_all_blocks=1 00:09:21.655 --rc geninfo_unexecuted_blocks=1 00:09:21.655 00:09:21.655 ' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.655 --rc genhtml_branch_coverage=1 00:09:21.655 --rc genhtml_function_coverage=1 00:09:21.655 --rc genhtml_legend=1 00:09:21.655 --rc geninfo_all_blocks=1 00:09:21.655 --rc geninfo_unexecuted_blocks=1 00:09:21.655 00:09:21.655 ' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.655 --rc genhtml_branch_coverage=1 00:09:21.655 --rc genhtml_function_coverage=1 00:09:21.655 --rc genhtml_legend=1 00:09:21.655 --rc geninfo_all_blocks=1 00:09:21.655 --rc geninfo_unexecuted_blocks=1 00:09:21.655 00:09:21.655 ' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.655 --rc genhtml_branch_coverage=1 00:09:21.655 --rc genhtml_function_coverage=1 00:09:21.655 --rc genhtml_legend=1 00:09:21.655 --rc geninfo_all_blocks=1 00:09:21.655 --rc geninfo_unexecuted_blocks=1 00:09:21.655 00:09:21.655 ' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:21.655 
10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:21.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75512 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75512 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75512 ']' 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
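The bdf fed to the stuck-admin-command test comes from the helper chain traced above: gen_nvme.sh emits an SPDK JSON config covering every NVMe device on the machine, jq pulls each controller's PCI address out of it, and get_first_nvme_bdf keeps the first entry. Condensed into a few lines (paths as in this repo checkout):

  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1   # mirrors the (( 4 == 0 )) guard above
  echo "${bdfs[0]}"                 # -> 0000:00:10.0 on this runner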
00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:21.655 10:15:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.655 [2024-11-29 10:15:01.028135] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:09:21.655 [2024-11-29 10:15:01.028271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75512 ] 00:09:21.914 [2024-11-29 10:15:01.182753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.914 [2024-11-29 10:15:01.204985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.914 [2024-11-29 10:15:01.205589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.914 [2024-11-29 10:15:01.205775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.914 [2024-11-29 10:15:01.205786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.480 nvme0n1 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_7L1P5.txt 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.480 true 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732875301 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75535 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.480 10:15:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:22.480 10:15:01 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.016 [2024-11-29 10:15:03.947702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:25.016 [2024-11-29 10:15:03.948564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:25.016 [2024-11-29 10:15:03.948686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:25.016 [2024-11-29 10:15:03.948769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.016 [2024-11-29 10:15:03.952467] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.016 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75535 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75535 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75535 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:25.016 10:15:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_7L1P5.txt 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:25.016 10:15:04 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_7L1P5.txt 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75512 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75512 ']' 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75512 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75512 00:09:25.016 killing process with pid 75512 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75512' 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75512 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75512 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:25.016 
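The epilogue above checks that the admin command completed with exactly the injected status (--sct 0 --sc 1 from the earlier bdev_nvme_add_error_injection call). The saved .cpl value is a base64-encoded 16-byte NVMe completion entry whose status field sits in bytes 14-15: bit 0 is the phase tag, bits 8:1 the status code, bits 11:9 the status code type. A minimal bash sketch equivalent to the base64_decode_bits calls:

  cpl='AAAAAAAAAAAAAAAAAAACAA=='   # the jq -r .cpl output from /tmp/err_inj_7L1P5.txt
  mapfile -t b < <(base64 -d <<<"$cpl" | hexdump -ve '/1 "0x%02x\n"')
  status=$(( b[14] | (b[15] << 8) ))   # -> 0x0002
  printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
  # -> SC=0x1 SCT=0x0, matching the injection, so the comparison at @75 passes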
************************************ 00:09:25.016 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:25.016 ************************************ 00:09:25.016 00:09:25.016 real 0m3.589s 00:09:25.016 user 0m12.824s 00:09:25.016 sys 0m0.449s 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.016 10:15:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.016 10:15:04 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:25.016 10:15:04 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:25.016 10:15:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.016 10:15:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.016 10:15:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.016 ************************************ 00:09:25.016 START TEST nvme_fio 00:09:25.016 ************************************ 00:09:25.016 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:25.016 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:25.016 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:25.016 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:25.016 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:25.016 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:25.017 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.017 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:25.017 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.017 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:25.017 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:25.017 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:25.017 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:25.017 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:25.017 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:25.017 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:25.275 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:25.275 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:25.534 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:25.534 10:15:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:25.534 10:15:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.803 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:25.803 fio-3.35 00:09:25.803 Starting 1 thread 00:09:32.375 00:09:32.375 test: (groupid=0, jobs=1): err= 0: pid=75658: Fri Nov 29 10:15:10 2024 00:09:32.375 read: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec) 00:09:32.376 slat (nsec): min=3344, max=51841, avg=5323.40, stdev=2522.75 00:09:32.376 clat (usec): min=221, max=9770, avg=2893.55, stdev=980.55 00:09:32.376 lat (usec): min=226, max=9811, avg=2898.87, stdev=982.13 00:09:32.376 clat percentiles (usec): 00:09:32.376 | 1.00th=[ 1795], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:32.376 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:32.376 | 70.00th=[ 2704], 80.00th=[ 3032], 90.00th=[ 4293], 95.00th=[ 5473], 00:09:32.376 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7767], 99.95th=[ 7963], 00:09:32.376 | 99.99th=[ 9503] 00:09:32.376 bw ( KiB/s): min=79920, max=91728, per=98.40%, avg=86949.33, stdev=6217.42, samples=3 00:09:32.376 iops : min=19980, max=22932, avg=21737.33, stdev=1554.36, samples=3 00:09:32.376 write: IOPS=21.9k, BW=85.7MiB/s (89.9MB/s)(172MiB/2001msec); 0 zone resets 00:09:32.376 slat (nsec): min=3498, max=71155, avg=5601.81, stdev=2632.22 00:09:32.376 clat (usec): min=237, max=9601, avg=2900.25, stdev=977.87 00:09:32.376 lat (usec): min=242, max=9611, avg=2905.86, stdev=979.46 00:09:32.376 clat percentiles (usec): 00:09:32.376 | 1.00th=[ 1795], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:32.376 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:32.376 | 70.00th=[ 2704], 80.00th=[ 3064], 90.00th=[ 4293], 95.00th=[ 5407], 00:09:32.376 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7767], 99.95th=[ 8029], 00:09:32.376 | 99.99th=[ 9241] 00:09:32.376 bw ( KiB/s): min=81488, max=91224, per=99.28%, avg=87146.67, stdev=5056.96, samples=3 00:09:32.376 iops : min=20372, max=22806, avg=21786.67, stdev=1264.24, samples=3 
00:09:32.376 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:32.376 lat (msec) : 2=1.99%, 4=86.56%, 10=11.40% 00:09:32.376 cpu : usr=99.15%, sys=0.05%, ctx=3, majf=0, minf=625 00:09:32.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:32.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.376 issued rwts: total=44202,43912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.376 00:09:32.376 Run status group 0 (all jobs): 00:09:32.376 READ: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 00:09:32.376 WRITE: bw=85.7MiB/s (89.9MB/s), 85.7MiB/s-85.7MiB/s (89.9MB/s-89.9MB/s), io=172MiB (180MB), run=2001-2001msec 00:09:32.376 ----------------------------------------------------- 00:09:32.376 Suppressions used: 00:09:32.376 count bytes template 00:09:32.376 1 32 /usr/src/fio/parse.c 00:09:32.376 1 8 libtcmalloc_minimal.so 00:09:32.376 ----------------------------------------------------- 00:09:32.376 00:09:32.376 10:15:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:32.376 10:15:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:32.376 10:15:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:32.376 10:15:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.376 10:15:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.376 10:15:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:32.376 10:15:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:32.376 10:15:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:32.376 10:15:11 nvme.nvme_fio -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:32.376 10:15:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.376 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:32.376 fio-3.35 00:09:32.376 Starting 1 thread 00:09:39.011 00:09:39.011 test: (groupid=0, jobs=1): err= 0: pid=75714: Fri Nov 29 10:15:17 2024 00:09:39.011 read: IOPS=20.9k, BW=81.7MiB/s (85.7MB/s)(163MiB/2001msec) 00:09:39.011 slat (nsec): min=3396, max=73306, avg=5112.50, stdev=2217.33 00:09:39.011 clat (usec): min=409, max=11700, avg=3050.17, stdev=1003.38 00:09:39.011 lat (usec): min=413, max=11753, avg=3055.29, stdev=1004.20 00:09:39.011 clat percentiles (usec): 00:09:39.011 | 1.00th=[ 1762], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:39.011 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2868], 00:09:39.011 | 70.00th=[ 3097], 80.00th=[ 3523], 90.00th=[ 4490], 95.00th=[ 5211], 00:09:39.011 | 99.00th=[ 6521], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[10683], 00:09:39.011 | 99.99th=[11600] 00:09:39.011 bw ( KiB/s): min=83256, max=86928, per=100.00%, avg=84986.67, stdev=1845.04, samples=3 00:09:39.011 iops : min=20814, max=21732, avg=21246.67, stdev=461.26, samples=3 00:09:39.011 write: IOPS=20.8k, BW=81.3MiB/s (85.3MB/s)(163MiB/2001msec); 0 zone resets 00:09:39.011 slat (nsec): min=3460, max=73817, avg=5220.87, stdev=2331.37 00:09:39.011 clat (usec): min=417, max=11611, avg=3064.73, stdev=1003.65 00:09:39.011 lat (usec): min=421, max=11622, avg=3069.96, stdev=1004.47 00:09:39.011 clat percentiles (usec): 00:09:39.011 | 1.00th=[ 1795], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:39.011 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2868], 00:09:39.011 | 70.00th=[ 3097], 80.00th=[ 3556], 90.00th=[ 4555], 95.00th=[ 5211], 00:09:39.011 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[10945], 00:09:39.011 | 99.99th=[11469] 00:09:39.011 bw ( KiB/s): min=83944, max=86616, per=100.00%, avg=85120.00, stdev=1364.44, samples=3 00:09:39.011 iops : min=20986, max=21654, avg=21280.00, stdev=341.11, samples=3 00:09:39.011 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:39.011 lat (msec) : 2=1.94%, 4=83.08%, 10=14.88%, 20=0.07% 00:09:39.011 cpu : usr=99.05%, sys=0.10%, ctx=4, majf=0, minf=626 00:09:39.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:39.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.011 issued rwts: total=41853,41661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.011 00:09:39.011 Run status group 0 (all jobs): 00:09:39.011 READ: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:39.011 WRITE: bw=81.3MiB/s (85.3MB/s), 81.3MiB/s-81.3MiB/s (85.3MB/s-85.3MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:39.011 ----------------------------------------------------- 00:09:39.011 Suppressions used: 
00:09:39.011 count bytes template 00:09:39.011 1 32 /usr/src/fio/parse.c 00:09:39.011 1 8 libtcmalloc_minimal.so 00:09:39.011 ----------------------------------------------------- 00:09:39.011 00:09:39.011 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:39.011 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:39.011 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:39.011 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:39.011 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:39.011 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:39.272 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:39.272 10:15:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:39.272 10:15:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:39.533 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.533 fio-3.35 00:09:39.533 Starting 1 thread 00:09:46.123 00:09:46.123 test: (groupid=0, jobs=1): err= 0: pid=75775: Fri Nov 29 10:15:24 2024 00:09:46.123 read: IOPS=16.5k, BW=64.5MiB/s (67.6MB/s)(129MiB/2001msec) 00:09:46.123 slat (usec): min=4, max=119, avg= 6.56, stdev= 3.65 00:09:46.123 clat (usec): min=272, max=13922, avg=3846.11, stdev=1307.10 00:09:46.123 lat (usec): 
min=278, max=13960, avg=3852.67, stdev=1308.62 00:09:46.123 clat percentiles (usec): 00:09:46.123 | 1.00th=[ 2409], 5.00th=[ 2704], 10.00th=[ 2802], 20.00th=[ 2966], 00:09:46.123 | 30.00th=[ 3064], 40.00th=[ 3195], 50.00th=[ 3326], 60.00th=[ 3490], 00:09:46.123 | 70.00th=[ 3884], 80.00th=[ 4752], 90.00th=[ 5866], 95.00th=[ 6849], 00:09:46.123 | 99.00th=[ 8029], 99.50th=[ 8455], 99.90th=[ 9372], 99.95th=[10421], 00:09:46.123 | 99.99th=[13698] 00:09:46.123 bw ( KiB/s): min=64712, max=68368, per=100.00%, avg=66912.00, stdev=1938.23, samples=3 00:09:46.123 iops : min=16178, max=17092, avg=16728.00, stdev=484.56, samples=3 00:09:46.123 write: IOPS=16.5k, BW=64.6MiB/s (67.8MB/s)(129MiB/2001msec); 0 zone resets 00:09:46.123 slat (usec): min=5, max=129, avg= 6.70, stdev= 3.65 00:09:46.123 clat (usec): min=361, max=13812, avg=3874.73, stdev=1294.44 00:09:46.123 lat (usec): min=367, max=13823, avg=3881.43, stdev=1295.90 00:09:46.123 clat percentiles (usec): 00:09:46.123 | 1.00th=[ 2442], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 2999], 00:09:46.123 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3359], 60.00th=[ 3523], 00:09:46.123 | 70.00th=[ 3884], 80.00th=[ 4752], 90.00th=[ 5866], 95.00th=[ 6849], 00:09:46.123 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9503], 99.95th=[10945], 00:09:46.123 | 99.99th=[12780] 00:09:46.123 bw ( KiB/s): min=64600, max=68184, per=100.00%, avg=66864.00, stdev=1969.67, samples=3 00:09:46.123 iops : min=16150, max=17046, avg=16716.00, stdev=492.42, samples=3 00:09:46.123 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:46.123 lat (msec) : 2=0.29%, 4=71.29%, 10=28.31%, 20=0.07% 00:09:46.123 cpu : usr=98.75%, sys=0.15%, ctx=5, majf=0, minf=624 00:09:46.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:46.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.123 issued rwts: total=33035,33105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.123 00:09:46.123 Run status group 0 (all jobs): 00:09:46.123 READ: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=129MiB (135MB), run=2001-2001msec 00:09:46.123 WRITE: bw=64.6MiB/s (67.8MB/s), 64.6MiB/s-64.6MiB/s (67.8MB/s-67.8MB/s), io=129MiB (136MB), run=2001-2001msec 00:09:46.123 ----------------------------------------------------- 00:09:46.123 Suppressions used: 00:09:46.123 count bytes template 00:09:46.123 1 32 /usr/src/fio/parse.c 00:09:46.123 1 8 libtcmalloc_minimal.so 00:09:46.123 ----------------------------------------------------- 00:09:46.123 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:46.123 10:15:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:46.123 10:15:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.123 10:15:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:46.123 10:15:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:46.123 10:15:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:46.123 10:15:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:46.123 10:15:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:46.123 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:46.123 fio-3.35 00:09:46.123 Starting 1 thread 00:09:50.335 00:09:50.335 test: (groupid=0, jobs=1): err= 0: pid=75830: Fri Nov 29 10:15:29 2024 00:09:50.335 read: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec) 00:09:50.335 slat (nsec): min=4870, max=98087, avg=7207.55, stdev=4440.13 00:09:50.335 clat (usec): min=328, max=11644, avg=4374.35, stdev=1508.37 00:09:50.335 lat (usec): min=335, max=11651, avg=4381.56, stdev=1509.90 00:09:50.335 clat percentiles (usec): 00:09:50.335 | 1.00th=[ 2376], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 3064], 00:09:50.335 | 30.00th=[ 3261], 40.00th=[ 3490], 50.00th=[ 3851], 60.00th=[ 4490], 00:09:50.335 | 70.00th=[ 5145], 80.00th=[ 5735], 90.00th=[ 6521], 95.00th=[ 7308], 00:09:50.335 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10945], 00:09:50.335 | 99.99th=[11338] 00:09:50.335 bw ( KiB/s): min=56560, max=61208, per=100.00%, avg=59037.33, stdev=2339.13, samples=3 00:09:50.335 iops : min=14140, max=15302, avg=14759.33, stdev=584.78, samples=3 00:09:50.335 write: IOPS=14.5k, BW=56.8MiB/s (59.5MB/s)(114MiB/2001msec); 0 zone resets 00:09:50.335 slat (nsec): min=4993, max=85518, avg=7390.31, stdev=4470.51 00:09:50.335 clat (usec): min=268, max=11584, avg=4406.78, stdev=1513.78 00:09:50.335 lat (usec): min=277, max=11591, avg=4414.17, stdev=1515.32 00:09:50.335 clat percentiles (usec): 00:09:50.335 | 1.00th=[ 2442], 5.00th=[ 2769], 10.00th=[ 
2900], 20.00th=[ 3097], 00:09:50.335 | 30.00th=[ 3294], 40.00th=[ 3490], 50.00th=[ 3884], 60.00th=[ 4555], 00:09:50.335 | 70.00th=[ 5145], 80.00th=[ 5800], 90.00th=[ 6587], 95.00th=[ 7308], 00:09:50.335 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10028], 99.95th=[10683], 00:09:50.335 | 99.99th=[11338] 00:09:50.335 bw ( KiB/s): min=56848, max=60288, per=100.00%, avg=58866.67, stdev=1796.11, samples=3 00:09:50.335 iops : min=14212, max=15072, avg=14716.67, stdev=449.03, samples=3 00:09:50.335 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:50.335 lat (msec) : 2=0.32%, 4=52.06%, 10=47.47%, 20=0.11% 00:09:50.335 cpu : usr=98.45%, sys=0.05%, ctx=3, majf=0, minf=623 00:09:50.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.335 issued rwts: total=29040,29090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.335 00:09:50.335 Run status group 0 (all jobs): 00:09:50.335 READ: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:09:50.335 WRITE: bw=56.8MiB/s (59.5MB/s), 56.8MiB/s-56.8MiB/s (59.5MB/s-59.5MB/s), io=114MiB (119MB), run=2001-2001msec 00:09:50.335 ----------------------------------------------------- 00:09:50.335 Suppressions used: 00:09:50.335 count bytes template 00:09:50.335 1 32 /usr/src/fio/parse.c 00:09:50.335 1 8 libtcmalloc_minimal.so 00:09:50.335 ----------------------------------------------------- 00:09:50.335 00:09:50.335 ************************************ 00:09:50.335 END TEST nvme_fio 00:09:50.335 ************************************ 00:09:50.335 10:15:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.335 10:15:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:50.335 00:09:50.335 real 0m25.275s 00:09:50.335 user 0m19.967s 00:09:50.335 sys 0m6.968s 00:09:50.335 10:15:29 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.335 10:15:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:50.335 ************************************ 00:09:50.335 END TEST nvme 00:09:50.335 ************************************ 00:09:50.336 00:09:50.336 real 1m32.568s 00:09:50.336 user 3m34.881s 00:09:50.336 sys 0m16.669s 00:09:50.336 10:15:29 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.336 10:15:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.336 10:15:29 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:50.336 10:15:29 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:50.336 10:15:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.336 10:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.336 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:09:50.336 ************************************ 00:09:50.336 START TEST nvme_scc 00:09:50.336 ************************************ 00:09:50.336 10:15:29 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:50.598 * Looking for test storage... 
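The three fio passes above all follow the same preamble from autotest_common.sh: ldd locates the ASan runtime that the SPDK fio plugin was linked against, and that runtime is preloaded ahead of the plugin itself, since the stock fio binary is not built with ASan. Note also the filename syntax: the PCI address is written with dots (traddr=0000.00.11.0) because fio splits filenames on ':'. A minimal standalone sketch of the pattern, with paths taken from this log (a sketch of the technique, not the exact fio_plugin helper):

```bash
#!/usr/bin/env bash
# Preload the sanitizer runtime the SPDK fio plugin was linked with,
# then run fio through the plugin. Paths are the ones from this log.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_bin=/usr/src/fio/fio
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# The real helper loops over libasan and libclang_rt.asan; grepping for
# libasan alone is a simplification.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# If no sanitizer is linked in, asan_lib stays empty and only the
# plugin is preloaded.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    "$fio_bin" "$config" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
```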
00:09:50.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.598 --rc genhtml_branch_coverage=1 00:09:50.598 --rc genhtml_function_coverage=1 00:09:50.598 --rc genhtml_legend=1 00:09:50.598 --rc geninfo_all_blocks=1 00:09:50.598 --rc geninfo_unexecuted_blocks=1 00:09:50.598 00:09:50.598 ' 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.598 --rc genhtml_branch_coverage=1 00:09:50.598 --rc genhtml_function_coverage=1 00:09:50.598 --rc genhtml_legend=1 00:09:50.598 --rc geninfo_all_blocks=1 00:09:50.598 --rc geninfo_unexecuted_blocks=1 00:09:50.598 00:09:50.598 ' 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.598 --rc genhtml_branch_coverage=1 00:09:50.598 --rc genhtml_function_coverage=1 00:09:50.598 --rc genhtml_legend=1 00:09:50.598 --rc geninfo_all_blocks=1 00:09:50.598 --rc geninfo_unexecuted_blocks=1 00:09:50.598 00:09:50.598 ' 00:09:50.598 10:15:29 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.598 --rc genhtml_branch_coverage=1 00:09:50.598 --rc genhtml_function_coverage=1 00:09:50.598 --rc genhtml_legend=1 00:09:50.598 --rc geninfo_all_blocks=1 00:09:50.598 --rc geninfo_unexecuted_blocks=1 00:09:50.598 00:09:50.598 ' 00:09:50.598 10:15:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.598 10:15:29 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.598 10:15:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.598 10:15:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.598 10:15:29 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.598 10:15:29 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:50.598 10:15:29 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
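The lcov probe a few records back runs through scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter one with zeros. A compressed, behavior-compatible sketch for purely numeric fields (the real helper additionally validates each field through its decimal() function):

```bash
#!/usr/bin/env bash
# Field-wise dotted-version comparison, mirroring the cmp_versions
# xtrace from scripts/common.sh. Handles numeric fields only.
cmp_versions() {
    local IFS=.-:               # split on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == *'='* ]]          # all fields equal: only ==, >=, <= succeed
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2"   # as in the log
```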
00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:50.598 10:15:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:50.598 10:15:29 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.598 10:15:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:50.598 10:15:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:50.598 10:15:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:50.599 10:15:29 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.122 Waiting for block devices as requested 00:09:51.122 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.122 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.383 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.383 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.686 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:56.686 10:15:35 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:56.686 10:15:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.686 10:15:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:56.686 10:15:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.686 10:15:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.686 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:56.687 10:15:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.687 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.688 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.689 10:15:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:56.689 
10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.689 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:56.690 10:15:35 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.690 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:56.691 10:15:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:56.691 10:15:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:56.691 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:56.692 10:15:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.692 10:15:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:56.692 10:15:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.692 10:15:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:56.692 10:15:35 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.692 
10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:56.692 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:56.693 
10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:56.693 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:56.694 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
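The trace above repeats one small pattern for every identify field: nvme_get runs nvme-cli, splits each output line on the first ':' via IFS, skips lines with no value, and evals the key/value pair into a global associative array (nvme1, ng1n1, ...). A minimal sketch of that loop, reconstructed from the xtrace output; the real nvme/functions.sh in the SPDK tree may differ in details:

    # Hedged reconstruction of the nvme_get pattern visible in the trace.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                       # global assoc array, e.g. nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue             # skip lines without a "key : value" shape
            reg=${reg//[[:space:]]/}              # "ps 0" -> "ps0", "sqes  " -> "sqes"
            val=${val#"${val%%[![:space:]]*}"}    # trim leading spaces, keep trailing padding
            eval "${ref}[$reg]=\"\$val\""         # e.g. nvme1[sqes]=0x66
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # Usage mirroring the trace:
    #   nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[nn]}"   # -> 256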
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:09:56.695 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:56.696 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
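Each lbafN entry above encodes an LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, rp the relative performance. flbas=0x7 selects format 7, which the dump marks "(in use)": 64 metadata bytes over 4096-byte data blocks. A quick sanity check of the block size, in plain bash using the lbads value from lbaf7 above:

    lbads=12
    echo "block size: $((1 << lbads)) bytes"   # -> block size: 4096 bytes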
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:56.697 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
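ng1n1 and nvme1n1 report identical id-ns data: they are the character-device and block-device views of the same namespace on nvme1. From the values above the namespace size works out directly (nsze is in logical blocks; lbaf7, the format in use, gives 4096-byte blocks):

    printf '%d blocks x 4096 bytes = %d bytes (~5.9 GiB)\n' \
        $((0x17a17a)) $((0x17a17a * 4096))
    # -> 1548666 blocks x 4096 bytes = 6343335936 bytes (~5.9 GiB)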
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:56.698 10:15:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:56.698 10:15:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:56.698 10:15:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:56.698 10:15:35 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:56.698 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:56.699 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:56.700 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:56.700 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:56.700 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:56.700 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:56.700 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:56.700 10:15:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
-- # read -r reg val 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.700 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:56.701 
10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.701 
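The trace above is the tail of the id-ctrl capture for nvme2: functions.sh@21 sets IFS=: and reads one "reg : val" pair per line of nvme-cli output, @22 skips pairs with no value, and @23 evals each pair into a global associative array, which is how nvme2[oacs]=0x12a, nvme2[subnqn]=nqn.2019-08.org.qemu:12342 and the rest become queryable shell state for the later scc checks. A minimal sketch of that loop, reconstructed from the @16-@23 line numbers visible in the trace (the whitespace trimming and the fixed nvme binary path are assumptions, not the verbatim helper):

    # nvme_get <array> <nvme-cli subcommand...> -- sketch, not the shipped function
    nvme_get() {
        local ref=$1 reg val                          # @17
        shift                                         # @18
        local -gA "$ref=()"                           # @20: declares e.g. nvme2 globally
        while IFS=: read -r reg val; do               # @21
            reg=${reg//[[:space:]]/}                  # assumed: strip padding around the key
            val=${val# }                              # assumed: drop the leading space
            [[ -n $val ]] || continue                 # @22: keep only lines carrying a value
            eval "${ref}[$reg]=\"$val\""              # @23: nvme2[oacs]="0x12a"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: id-ctrl /dev/nvme2, id-ns /dev/ng2n1, ...
    }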
10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:56.701 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.702 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:56.703 10:15:36 nvme_scc -- 
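At this point ng2n1 is fully captured (through lbaf7) and functions.sh@58 files it into the controller's namespace map before the @54 loop moves on to ng2n2. The recorded fields are already enough to derive the namespace geometry; a hedged example using the values traced above (the ng2n1 array is the one nvme_get populated, nothing else here is from the shipped test):

    # Capacity of ng2n1 from the captured fields: flbas=0x4 selects lbaf4,
    # whose lbads:12 means 4096-byte blocks; nsze=0x100000 blocks.
    flbas_idx=$(( ${ng2n1[flbas]} & 0xf ))      # low nibble of flbas -> 4
    lbaf=${ng2n1[lbaf$flbas_idx]}               # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # -> 12
    bytes=$(( ${ng2n1[nsze]} * (1 << lbads) ))  # 0x100000 * 4096 = 4 GiB
    printf '%s: %d bytes (%d GiB)\n' ng2n1 "$bytes" $(( bytes >> 30 ))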
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.703 
10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:56.703 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.704 10:15:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:56.704 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.705 10:15:36 
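ng2n2 closes out the same way (@58 stores it under index 2) and the @54 loop picks up ng2n3, the last character-device namespace of this controller. The enumeration itself is the extglob pattern shown at @54: it matches both ngXnY and nvmeXnY entries under the controller's sysfs directory, and @58 indexes each parsed namespace by its trailing number through the _ctrl_ns nameref declared at @53. A standalone sketch of that mechanism (the paths and the pre-declared nvme2_ns array are assumptions for illustration):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns                  # @53: nameref, writes land in nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng2n* | nvme2n*
        [[ -e $ns ]] || continue                  # @55
        ns_dev=${ns##*/}                          # @56: e.g. ng2n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: capture its id-ns fields
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: index by namespace number
    done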
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.705 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.706 10:15:36 nvme_scc -- 
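Everything above is bash xtrace from the nvme_get helper in nvme/functions.sh: for each namespace node it runs `nvme id-ns`, splits every human-readable "field : value" line on the first ':' (IFS=: with read -r reg val), skips lines that carry no value, and evals the pair into a global associative array named after the device (ng2n3[nsze]=0x100000 and so on). A minimal stand-alone sketch of that pattern, with simplified trimming rather than the exact functions.sh source:

    # Load the "field : value" output of an nvme-cli query into a global
    # associative array named by $1 (simplified sketch of the traced helper).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declares a global ng2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue     # header/blank lines have no value
            reg=${reg//[[:space:]]/}      # "lbaf  4" -> "lbaf4"
            val=${val# }                  # drop the space after the colon
            eval "${ref}[\$reg]=\$val"    # ng2n3[nsze]=0x100000, ...
        done < <("$@")
    }

    nvme_get ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
    echo "${ng2n3[nsze]}"                 # -> 0x100000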
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:56.706 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:56.707 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
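Each namespace pass comes from the single extglob pattern traced at functions.sh@54. With ctrl=/sys/class/nvme/nvme2, "${ctrl##*nvme}" expands to 2 and "${ctrl##*/}" to nvme2, so the glob @(ng2|nvme2n)* visits both the generic character-device nodes (ng2n1..ng2n3) and the block-device nodes (nvme2n1..nvme2n3); _ctrl_ns is keyed by "${ns##*n}", the digits after the last 'n', so the nvme2nX entries visited later overwrite the ng2nX ones at the same index. A sketch of just the discovery step (assuming extglob is enabled, as the traced script requires):

    shopt -s extglob
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2

    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue     # an unmatched glob stays literal
        ns_dev=${ns##*/}             # ng2n3, nvme2n1, ...
        _ctrl_ns[${ns##*n}]=$ns_dev  # key = namespace index, e.g. "3"
    done
    declare -p _ctrl_ns  # e.g. ([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")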
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:09:56.708 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
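Every namespace here reports flbas=0x4. Bits 3:0 of FLBAS select the active LBA format, which is why lbaf4 carries the "(in use)" marker, and its lbads:12 means 2^12 = 4096-byte logical blocks (lbaf0, with lbads:9, would be classic 512-byte blocks). Decoding that from one of the arrays built above:

    flbas=${nvme2n2[flbas]}        # 0x4
    fmt=$(( flbas & 0xf ))         # bits 3:0 -> LBA format index 4
    lbaf=${nvme2n2[lbaf$fmt]}      # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}          # strip up to the lbads field
    lbads=${lbads%% *}             # -> 12
    echo "in-use block size: $(( 1 << lbads )) bytes"   # -> 4096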
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:09:56.709 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:09:56.710 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:56.711 10:15:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.711 10:15:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:56.711 10:15:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.711 10:15:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.711 10:15:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:56.972 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.972 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.972 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.972 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.972 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 
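
What runs through here is the per-controller discovery loop from nvme/functions.sh: each /sys/class/nvme/nvme* entry is checked by pci_can_use (the empty `[[ =~ ]]` and `[[ -z '' ]]` tests in the trace suggest the allow/block lists are empty in this run) and then handed to nvme_get. A minimal sketch of that loop follows; how `pci` is derived is not visible in the trace, so the readlink below is only a plausible stand-in, and pci_can_use / nvme_get are the traced helpers, not redefined here.

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: one way to obtain 0000:00:13.0
    pci_can_use "$pci" || continue                    # allow/block lists are empty here, so this passes
    ctrl_dev=${ctrl##*/}                              # e.g. nvme3
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # parse `nvme id-ctrl` into an assoc array
done
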
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:56.973 10:15:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:56.973 10:15:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.973 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 
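
Two of the raw values captured for nvme3 a few entries up decode as follows, reading them against the NVMe identify-controller layout (the arithmetic is the only thing added here): ver=0x10400 packs major/minor/tertiary version bytes, and mdts expresses the maximum transfer size as a power of two of the controller's minimum page size, assumed 4 KiB for this QEMU device since MPSMIN is not shown in the trace.

ver=0x10400
printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))   # NVMe 1.4.0
mdts=7 mpsmin=4096                                    # mpsmin assumed, not visible in the trace
echo "$(((1 << mdts) * mpsmin / 1024)) KiB max transfer size"                    # 512 KiB
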
10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:56.974 10:15:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 
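
Every eval line in this dump is the same idiom from nvme_get (functions.sh@17-23): `nvme id-ctrl` prints `reg : value` pairs, `IFS=:` splits them, the `[[ -n … ]]` guard at @22 skips blank values, and eval stores the pair in a global associative array named after the controller. A compact re-creation under those observations; parse_id_ctrl is an illustrative name, not the real functions.sh implementation.

parse_id_ctrl() {
    local ref=$1 reg val
    local -gA "$ref=()"                               # declares a global assoc array, e.g. nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                      # "ps 0   " becomes "ps0", as seen in the trace
        [[ -n $reg && -n $val ]] || continue          # mirrors the functions.sh@22 guard
        eval "$ref[\$reg]=\${val# }"                  # nvme3[vid]=0x1b36, nvme3[sn]='12343 ', ...
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$1")
}
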
10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:56.974 
10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.974 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:56.975 10:15:36 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:56.975 10:15:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:56.975 10:15:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
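
The scan starting here (and continuing below for nvme3 and nvme2) is the SCC feature gate: for each discovered controller, get_oncs fetches the ONCS word captured earlier (0x15d on all four) and functions.sh@188 tests bit 8, which advertises the NVMe Simple Copy Command. 0x15d is binary 1_0101_1101, so bit 8 (0x100) is set everywhere; nvme1 simply comes out of the associative-array iteration first and becomes the controller under test. The bit test in isolation:

oncs=0x15d
if (( oncs & 1 << 8 )); then                          # 0x15d & 0x100 == 0x100, i.e. true
    echo "controller advertises Simple Copy (SCC)"
fi

The simple_copy binary then exercises that command on the selected controller; per its own output below, it writes LBAs 0 to 63 with random data, copies them to destination LBA 256, and verifies that all 64 LBAs match.
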
00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:56.976 10:15:36 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:56.976 10:15:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:56.976 10:15:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:56.976 10:15:36 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.807 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.807 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.807 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.807 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.070 10:15:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:58.070 10:15:37 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.070 10:15:37 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.070 10:15:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:58.070 ************************************ 00:09:58.070 START TEST nvme_simple_copy 00:09:58.070 ************************************ 00:09:58.070 10:15:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:58.332 Initializing NVMe Controllers 00:09:58.332 Attaching to 0000:00:10.0 00:09:58.332 Controller supports SCC. Attached to 0000:00:10.0 00:09:58.332 Namespace ID: 1 size: 6GB 00:09:58.332 Initialization complete. 
00:09:58.332 00:09:58.332 Controller QEMU NVMe Ctrl (12340 ) 00:09:58.332 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:58.332 Namespace Block Size:4096 00:09:58.332 Writing LBAs 0 to 63 with Random Data 00:09:58.332 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:58.332 LBAs matching Written Data: 64 00:09:58.332 00:09:58.332 real 0m0.257s 00:09:58.332 user 0m0.101s 00:09:58.332 sys 0m0.053s 00:09:58.332 10:15:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.332 ************************************ 00:09:58.332 END TEST nvme_simple_copy 00:09:58.332 ************************************ 00:09:58.332 10:15:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.332 00:09:58.332 real 0m7.842s 00:09:58.332 user 0m1.136s 00:09:58.332 sys 0m1.475s 00:09:58.332 10:15:37 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.332 ************************************ 00:09:58.332 END TEST nvme_scc 00:09:58.332 ************************************ 00:09:58.332 10:15:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:58.332 10:15:37 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:58.332 10:15:37 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:58.332 10:15:37 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:58.332 10:15:37 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:58.332 10:15:37 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:58.332 10:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.332 10:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.332 10:15:37 -- common/autotest_common.sh@10 -- # set +x 00:09:58.332 ************************************ 00:09:58.332 START TEST nvme_fdp 00:09:58.332 ************************************ 00:09:58.332 10:15:37 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:58.332 * Looking for test storage... 00:09:58.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.332 10:15:37 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.332 10:15:37 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.332 10:15:37 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.593 10:15:37 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.593 10:15:37 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:58.593 10:15:37 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.593 10:15:37 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.593 --rc genhtml_branch_coverage=1 00:09:58.593 --rc genhtml_function_coverage=1 00:09:58.593 --rc genhtml_legend=1 00:09:58.594 --rc geninfo_all_blocks=1 00:09:58.594 --rc geninfo_unexecuted_blocks=1 00:09:58.594 00:09:58.594 ' 00:09:58.594 10:15:37 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.594 --rc genhtml_branch_coverage=1 00:09:58.594 --rc genhtml_function_coverage=1 00:09:58.594 --rc genhtml_legend=1 00:09:58.594 --rc geninfo_all_blocks=1 00:09:58.594 --rc geninfo_unexecuted_blocks=1 00:09:58.594 00:09:58.594 ' 00:09:58.594 10:15:37 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.594 --rc genhtml_branch_coverage=1 00:09:58.594 --rc genhtml_function_coverage=1 00:09:58.594 --rc genhtml_legend=1 00:09:58.594 --rc geninfo_all_blocks=1 00:09:58.594 --rc geninfo_unexecuted_blocks=1 00:09:58.594 00:09:58.594 ' 00:09:58.594 10:15:37 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.594 --rc genhtml_branch_coverage=1 00:09:58.594 --rc genhtml_function_coverage=1 00:09:58.594 --rc genhtml_legend=1 00:09:58.594 --rc geninfo_all_blocks=1 00:09:58.594 --rc geninfo_unexecuted_blocks=1 00:09:58.594 00:09:58.594 ' 00:09:58.594 10:15:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.594 10:15:37 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.594 10:15:37 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.594 10:15:37 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.594 10:15:37 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.594 10:15:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.594 10:15:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.594 10:15:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.594 10:15:37 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:58.594 10:15:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:58.594 10:15:37 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:58.594 10:15:37 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.594 10:15:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:58.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.855 Waiting for block devices as requested 00:09:59.115 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.115 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.115 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.375 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.697 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:04.697 10:15:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:04.697 10:15:43 nvme_fdp 
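
Stepping back, the cmp_versions trace just above (scripts/common.sh@333-368) gates the coverage flags on the installed lcov: both version strings are split on '.', '-' or ':' and compared field by field, so `lt 1.15 2` returns 0 (true) and the old-style `--rc lcov_branch_coverage=1` options are exported. The same logic, restated compactly as a sketch rather than the actual common.sh code:

lt() {
    local -a v1 v2; local i
    IFS=.-: read -ra v1 <<< "$1"                      # same IFS split the trace shows
    IFS=.-: read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                          # equal is not "less than"
}
lt 1.15 2 && echo "old lcov: enable branch/function coverage rc options"
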
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:04.697 10:15:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.697 10:15:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:04.697 10:15:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.697 10:15:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.697 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- 
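
What follows for several screens is functions.sh's nvme_get harvesting `nvme id-ctrl /dev/nvme0` into a global associative array, one "reg : val" line at a time, via IFS=: plus read and an eval per field. A self-contained sketch of the same pattern, simplified for illustration (the real nvme_get in test/common/nvme/functions.sh does more bookkeeping):

    # Parse `nvme id-ctrl` output into a global associative array, e.g. nvme0.
    scan_ctrl() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                        # declare -gA nvme0
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # strip key padding
            [[ -n $reg && -n $val ]] || continue   # skip blanks/banner
            eval "${ref}[\$reg]=\${val# }"         # nvme0[vid]='0x1b36', ...
        done < <(nvme id-ctrl "$dev")
    }
    scan_ctrl nvme0 /dev/nvme0
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]}"
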
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:04.698 10:15:43 nvme_fdp -- 
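
Two of the registers just captured decode nicely. VER packs major/minor/tertiary as 16/8/8 bits, so 0x10400 is NVMe 1.4.0; MDTS is a power-of-two multiple of the controller's minimum page size, so mdts=7 caps transfers at 2^7 pages (512 KiB, assuming the usual 4 KiB CAP.MPSMIN on this QEMU controller):

    ver=0x10400      # from nvme0[ver] above
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    mdts=7           # max transfer = 2^MDTS * MPSMIN
    echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"   # 512 KiB
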
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.698 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:04.698 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # 
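
The thresholds stored above are in Kelvin, which is how id-ctrl reports them: wctemp=343 and cctemp=373 are the warning and critical composite-temperature limits, roughly 70 °C and 100 °C:

    wctemp=343 cctemp=373
    echo "warning:  $((wctemp - 273)) C"    # ~70 C
    echo "critical: $((cctemp - 273)) C"    # ~100 C
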
IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:04.699 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.699 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 
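
ONCS=0x15d, captured just above, is a bitmask of optional NVM commands; reading the set bits against the NVMe base spec's layout, this QEMU controller offers Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy, while bits 1, 5 and 7 (Write Uncorrectable, Reservations, Verify) are clear:

    oncs=0x15d
    names=([0]=Compare [2]="Dataset Management" [3]="Write Zeroes"
           [4]="Save/Select in Features" [6]=Timestamp [8]=Copy)
    for bit in "${!names[@]}"; do
        (( oncs & (1 << bit) )) && echo "bit $bit: ${names[bit]}"
    done
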
10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:04.700 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:04.700 10:15:43 
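
The namespace loop entered above leans on extglob (enabled back when scripts/common.sh was sourced): @(a|b) matches exactly one of the alternatives, so for ctrl=/sys/class/nvme/nvme0 the pattern catches both node flavors, the generic character device ng0n1 and the block device nvme0n1. A standalone illustration:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # "ng${ctrl##*nvme}"  -> "ng0"    (generic char dev prefix)
    # "${ctrl##*/}n"      -> "nvme0n" (block dev prefix)
    echo "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
    # on this VM: /sys/class/nvme/nvme0/ng0n1 /sys/class/nvme/nvme0/nvme0n1
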
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.700 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:04.701 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:04.701 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:04.702 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
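
The lbaf0..lbaf7 rows close out the ng0n1 dump, and together with the fields captured earlier they give the namespace geometry: flbas=0x4 selects LBA format 4, the "(in use)" row above with lbads:12 (4096-byte blocks, no metadata), and nsze=0x140000 blocks then works out to exactly 5 GiB:

    nsze=0x140000 lbads=12
    echo "$(( nsze << lbads )) bytes"            # 5368709120
    echo "$(( (nsze << lbads) >> 30 )) GiB"      # 5
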
00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- 
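
Note the double bookkeeping: after finishing ng0n1, the loop records it in the per-controller namespace map and then re-runs the same id-ns parse against the block node nvme0n1, so each field ends up available under both names. Downstream test steps can then treat everything as plain array lookups; an illustrative use, with values per the dump above:

    echo "ctrl sn=${nvme0[sn]} subnqn=${nvme0[subnqn]}"
    echo "ns1: ${nvme0n1[nsze]} blocks, in-use flbas=${nvme0n1[flbas]}"
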
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.702 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:04.703 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.703 10:15:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.703 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:04.704 10:15:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.704 10:15:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:04.704 10:15:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.704 10:15:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:04.704 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.704 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
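A note for readers following the trace: the repeating IFS=: / read -r reg val / eval steps above are nvme_get from nvme/functions.sh splitting each "field : value" line printed by nvme-cli into a Bash associative array (here nvme1). Below is a minimal standalone sketch of that technique, not the helper verbatim; the real script resolves the nvme binary itself and trims values slightly differently.

#!/usr/bin/env bash
# Sketch of the parsing pattern visible in the trace: turn nvme-cli's
# "reg : val" lines into a global associative array named by $1.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declare the global array nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip lines with no value, as in the trace
        reg=${reg//[[:space:]]/}         # keys like vid, sn, lbaf0
        val=${val# }                     # drop the space nvme-cli prints after ':'
        eval "${ref}[${reg}]=\"\$val\""  # nvme1[vid]=0x1b36, nvme1[sn]='12340 ', ...
    done < <("$@")
}

# usage (device path is an example):
#   nvme_get nvme1 nvme id-ctrl /dev/nvme1
#   echo "${nvme1[sn]}"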
00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.705 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:04.706 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:04.707 10:15:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
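The namespace loop driving this stretch of the trace is the extglob pattern at functions.sh@54 (visible just above): it matches both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs directory, which is why each namespace gets two identical id-ns parses. A standalone sketch of that glob, using an example controller path:

#!/usr/bin/env bash
# Sketch of the namespace glob from functions.sh@54 in the trace:
# match both generic char nodes (ng1n1) and block nodes (nvme1n1).
shopt -s extglob nullglob

ctrl=/sys/class/nvme/nvme1                       # example controller path
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # ng${ctrl##*nvme} expands to ng1, ${ctrl##*/}n to nvme1n
    echo "namespace node: ${ns##*/}"             # ng1n1, nvme1n1, ...
done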
00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.707 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:04.708 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
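The lbafN/flbas values being collected here are enough to compute the namespace's block size: the low nibble of flbas selects the in-use LBA format, and that format's lbads field is a power-of-two shift. For ng1n1 the trace shows flbas=0x7 (above) and, a little further on, lbaf7 as 'ms:64 lbads:12 rp:0 (in use)', i.e. 4096-byte blocks with 64 bytes of metadata. A sketch with a hypothetical helper (ns_block_size is not part of nvme/functions.sh):

#!/usr/bin/env bash
# Sketch: decode the in-use LBA format from an array filled by nvme_get.
# ns_block_size is a hypothetical helper, not part of nvme/functions.sh.
declare -A ng1n1=(
    [flbas]=0x7
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
)

ns_block_size() {
    local -n _ns=$1
    local fmt=$(( ${_ns[flbas]} & 0xf ))  # low nibble selects lbafN
    local lbaf=${_ns[lbaf$fmt]}
    local lbads=${lbaf#*lbads:}           # text after 'lbads:'
    echo $(( 1 << ${lbads%% *} ))         # lbads is a power-of-two shift
}

ns_block_size ng1n1                       # prints 4096 for lbads:12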
00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:04.708 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.709 10:15:43 nvme_fdp -- 
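The trace above is nvme_get at work: for each namespace node it shells out to /usr/local/src/nvme-cli/nvme id-ns, splits every output line on ':' into a register name and value, and evals the pair into a global associative array named after the device (ng1n1, nvme1n1, ...). A minimal sketch of that parsing loop, assuming id-ns output of the form 'reg : value' and using a hypothetical target array ns_info in place of the eval'd reference:

    # Parse `nvme id-ns` output into an associative array (sketch).
    declare -gA ns_info=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue        # skip blank or malformed lines
        reg=${reg//[[:space:]]/}                    # drop the padding around the key
        val=${val#"${val%%[![:space:]]*}"}          # trim leading spaces from the value
        ns_info[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1)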
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:04.709 10:15:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:04.710 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:04.710 10:15:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.710 10:15:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:04.710 10:15:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.710 10:15:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # 
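With nvme1 fully recorded, the outer loop moves on to /sys/class/nvme/nvme2: it resolves the controller's PCI address (0000:00:12.0), filters it through pci_can_use, and only then runs id-ctrl against it. The bookkeeping at the end of each iteration amounts to something like the sketch below (the `address` attribute is the sysfs file PCIe controllers expose; the ctrls/bdfs array names mirror the trace, the rest is assumed):

    # Enumerate controllers and record their PCI addresses (sketch).
    declare -A ctrls=() bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        dev=${ctrl##*/}                 # e.g. nvme2
        pci=$(<"$ctrl/address")         # e.g. 0000:00:12.0
        ctrls[$dev]=$dev
        bdfs[$dev]=$pci
    done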
[[ -n '' ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:04.711 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:04.712 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.712 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:04.713 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # 
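Because nvme_get declares each target with `local -gA`, the arrays it fills outlive the function, so once discovery completes any register captured above can be read back directly; for example, with the values this trace recorded for nvme2:

    # Read back registers captured for nvme2 (values from the trace above).
    echo "oncs=${nvme2[oncs]} nn=${nvme2[nn]} subnqn=${nvme2[subnqn]}"
    # -> oncs=0x15d nn=256 subnqn=nqn.2019-08.org.qemu:12342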
[id-ns parse trace for ng2n1 condensed to its results: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:10:04.715 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
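The functions.sh@54-@58 records show how namespaces are enumerated: an extglob alternation matches both the generic-char node (ng2n1) and the block node (nvme2n1) under the controller's sysfs directory, and the array key is the namespace number clipped off the node name. A standalone sketch of that glob, with the paths as they appear in the trace (it only finds matches on a host where an nvme2 controller exists):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            # ${ctrl##*nvme} -> 2 and ${ctrl##*/} -> nvme2, so the pattern is
            # @(ng2|nvme2n)*: it matches ng2n1..ng2n3 as well as nvme2n1
            _ctrl_ns[${ns##*n}]=${ns##*/}   # e.g. _ctrl_ns[1]=ng2n1
    done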
00:10:04.715 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:04.715 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:10:04.715 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:10:04.715 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:10:04.715 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
[id-ns parse trace for ng2n2 condensed: identical to ng2n1 above, field for field]
00:10:04.717 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
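Each per-namespace array is attached to its controller through the nameref taken at functions.sh@53 (local -n _ctrl_ns=nvme2_ns): writes to _ctrl_ns land in the controller-specific nvme2_ns array. A tiny self-contained illustration of that bash nameref behavior; the register_ns wrapper is a made-up name for illustration only (bash >= 4.3):

    declare -A nvme2_ns=()
    register_ns() {
            local -n _ctrl_ns=$1    # _ctrl_ns now aliases the named array
            _ctrl_ns[$2]=$3
    }
    register_ns nvme2_ns 1 ng2n1
    register_ns nvme2_ns 2 ng2n2
    echo "${nvme2_ns[1]} ${nvme2_ns[2]}"   # prints: ng2n1 ng2n2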
00:10:04.717 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:04.717 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:04.717 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:04.717 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:10:04.717 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
[id-ns parse trace for ng2n3 condensed: identical to ng2n1 above, field for field]
00:10:04.718 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
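Every namespace in this dump reports flbas=0x4 with lbaf4='ms:0 lbads:12 rp:0 (in use)': the low bits of FLBAS select LBA format 4, which carries no per-block metadata (ms:0) and a data size of 2^lbads bytes. Decoding that is plain arithmetic, not part of the test script:

    lbads=12
    echo $((1 << lbads))   # 4096-byte logical blocks for the in-use lbaf4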
00:10:04.718 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:04.718 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:04.718 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:04.718 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:04.718 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
[id-ns parse trace for nvme2n1 condensed: matches ng2n1 above through nows=0]
00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 --
# [[ -n 128 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.719 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.720 
10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
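The xtrace above records how nvme/functions.sh builds its per-namespace snapshots: nvme_get (functions.sh@16-23) runs `nvme id-ns` on the device, splits every output line on ':' via `IFS=: read -r reg val`, and evals each pair into a global associative array, which is how nvme2n1[lbaf4] ends up holding the in-use LBA format string. A minimal sketch of that loop, assuming id-ns prints one "field : value" pair per line; the helper name and the exact whitespace trimming are illustrative, not the upstream code:

  # Sketch only: mirror of the nvme_get pattern visible in the trace.
  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"                  # e.g. nvme2n1=(), as at functions.sh@20
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}         # "nsze      " -> "nsze"
          val=${val# }                     # drop the pad space after ':'
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[${reg}]=\"${val}\"" # nvme2n1[nsze]="0x100000"
      done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
  }
  # e.g.: nvme_get_sketch nvme2n1 /dev/nvme2n1; echo "${nvme2n1[flbas]}"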
00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:04.720 10:15:43 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.720 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:04.721 10:15:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:04.722 10:15:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:04.722 10:15:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:04.722 10:15:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.723 10:15:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:04.723 10:15:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.723 10:15:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:04.723 10:15:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.723 10:15:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:04.723 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
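Just above, at functions.sh@47-52 (after the nvme2 namespaces were registered at @54-63), the loop moves on to the next controller: it iterates /sys/class/nvme/nvme*, resolves the controller's PCI address (0000:00:13.0 for nvme3), filters it through pci_can_use in scripts/common.sh, and then snapshots `nvme id-ctrl` into the nvme3 array with the same read/eval loop. A rough equivalent of that bookkeeping, assuming the PCI address can be taken from the sysfs device symlink (the readlink lookup is an assumption; in the trace the script already has it as $pci):

  # Sketch only: controller enumeration and the arrays filled at @60-63.
  declare -A ctrls nvmes bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                             # nvme3
      pci=$(basename "$(readlink -f "$ctrl/device")")  # 0000:00:13.0
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # name of the ns map
      bdfs[$ctrl_dev]=$pci
  done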
00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 
10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
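The wall of trace above is one loop in nvme/functions.sh doing the same thing for every identify-controller field: split each "reg : val" line on ':' and store it in a per-controller array. A condensed sketch of that pattern (the real script evals into a dynamically named array; here the nvme3 name is hardcoded for clarity, and the input is assumed to be nvme-cli id-ctrl output):

declare -A nvme3

parse_id_ctrl() {
	local reg val
	while IFS=: read -r reg val; do
		reg=${reg//[[:space:]]/}                     # key, e.g. oacs
		val="${val#"${val%%[![:space:]]*}"}"         # strip leading spaces
		val="${val%"${val##*[![:space:]]}"}"         # strip trailing spaces
		[[ -n $reg && -n $val ]] || continue
		nvme3[$reg]=$val
	done
}

# usage sketch: parse_id_ctrl < <(nvme id-ctrl /dev/nvme3)
# echo "${nvme3[oacs]}"   # -> 0x12a for the controller traced above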
00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:04.726 10:15:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:04.726 10:15:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:04.727 10:15:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:04.727 10:15:44 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:04.727 10:15:44 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.874 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.874 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.874 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.874 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.874 10:15:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:05.874 10:15:45 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.874 10:15:45 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.874 10:15:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:05.874 ************************************ 00:10:05.874 START TEST nvme_flexible_data_placement 00:10:05.874 ************************************ 00:10:05.874 10:15:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:06.134 Initializing NVMe Controllers 00:10:06.134 Attaching to 0000:00:13.0 00:10:06.134 Controller supports FDP Attached to 0000:00:13.0 00:10:06.134 Namespace ID: 1 Endurance Group ID: 1 00:10:06.134 Initialization complete. 
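Before the fdp binary runs, get_ctrls_with_feature has to pick a controller: the test it traces is whether CTRATT bit 19 (Flexible Data Placement) is set, which is why nvme3, whose ctratt reads 0x88010, is echoed while the controllers reporting only 0x8000 are skipped. A sketch of that check with the values from this run:

ctrl_has_fdp() {
	local ctratt=$1
	(( ctratt & 1 << 19 ))      # bit 19 of CTRATT advertises FDP
}

ctrl_has_fdp 0x8000  && echo fdp || echo "no fdp"   # nvme0/nvme1/nvme2 here
ctrl_has_fdp 0x88010 && echo fdp || echo "no fdp"   # nvme3, the one selected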
00:10:06.134 00:10:06.134 ================================== 00:10:06.134 == FDP tests for Namespace: #01 == 00:10:06.134 ================================== 00:10:06.134 00:10:06.134 Get Feature: FDP: 00:10:06.134 ================= 00:10:06.134 Enabled: Yes 00:10:06.134 FDP configuration Index: 0 00:10:06.134 00:10:06.134 FDP configurations log page 00:10:06.134 =========================== 00:10:06.134 Number of FDP configurations: 1 00:10:06.134 Version: 0 00:10:06.134 Size: 112 00:10:06.134 FDP Configuration Descriptor: 0 00:10:06.134 Descriptor Size: 96 00:10:06.134 Reclaim Group Identifier format: 2 00:10:06.134 FDP Volatile Write Cache: Not Present 00:10:06.134 FDP Configuration: Valid 00:10:06.134 Vendor Specific Size: 0 00:10:06.134 Number of Reclaim Groups: 2 00:10:06.134 Number of Reclaim Unit Handles: 8 00:10:06.134 Max Placement Identifiers: 128 00:10:06.134 Number of Namespaces Supported: 256 00:10:06.134 Reclaim Unit Nominal Size: 6000000 bytes 00:10:06.134 Estimated Reclaim Unit Time Limit: Not Reported 00:10:06.134 RUH Desc #000: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #001: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #002: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #003: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #004: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #005: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #006: RUH Type: Initially Isolated 00:10:06.134 RUH Desc #007: RUH Type: Initially Isolated 00:10:06.134 00:10:06.134 FDP reclaim unit handle usage log page 00:10:06.134 ====================================== 00:10:06.134 Number of Reclaim Unit Handles: 8 00:10:06.134 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:06.134 RUH Usage Desc #001: RUH Attributes: Unused 00:10:06.134 RUH Usage Desc #002: RUH Attributes: Unused 00:10:06.134 RUH Usage Desc #003: RUH Attributes: Unused 00:10:06.134 RUH Usage Desc #004: RUH Attributes: Unused 00:10:06.134 RUH Usage Desc #005: RUH Attributes: Unused 00:10:06.134 RUH Usage Desc #006: RUH Attributes: Unused 00:10:06.134 RUH Usage Desc #007: RUH Attributes: Unused 00:10:06.134 00:10:06.134 FDP statistics log page 00:10:06.134 ======================= 00:10:06.134 Host bytes with metadata written: 2181615616 00:10:06.134 Media bytes with metadata written: 2182909952 00:10:06.134 Media bytes erased: 0 00:10:06.134 00:10:06.134 FDP Reclaim unit handle status 00:10:06.134 ============================== 00:10:06.134 Number of RUHS descriptors: 2 00:10:06.134 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001f73 00:10:06.134 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:06.134 00:10:06.134 FDP write on placement id: 0 success 00:10:06.134 00:10:06.134 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:06.134 00:10:06.134 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:06.134 00:10:06.134 Get Feature: FDP Events for Placement handle: #0 00:10:06.134 ======================== 00:10:06.134 Number of FDP Events: 6 00:10:06.134 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:06.134 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:06.134 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:06.134 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:06.134 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:06.134 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:06.134 00:10:06.134 FDP events log
page 00:10:06.134 =================== 00:10:06.134 Number of FDP events: 1 00:10:06.134 FDP Event #0: 00:10:06.134 Event Type: RU Not Written to Capacity 00:10:06.134 Placement Identifier: Valid 00:10:06.134 NSID: Valid 00:10:06.134 Location: Valid 00:10:06.134 Placement Identifier: 0 00:10:06.134 Event Timestamp: 5 00:10:06.134 Namespace Identifier: 1 00:10:06.134 Reclaim Group Identifier: 0 00:10:06.134 Reclaim Unit Handle Identifier: 0 00:10:06.134 00:10:06.134 FDP test passed 00:10:06.134 00:10:06.134 real 0m0.220s 00:10:06.134 user 0m0.066s 00:10:06.134 sys 0m0.052s 00:10:06.134 10:15:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.134 10:15:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:06.134 ************************************ 00:10:06.134 END TEST nvme_flexible_data_placement 00:10:06.134 ************************************ 00:10:06.134 00:10:06.134 real 0m7.824s 00:10:06.134 user 0m1.110s 00:10:06.134 sys 0m1.460s 00:10:06.134 10:15:45 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.134 ************************************ 00:10:06.134 END TEST nvme_fdp 00:10:06.134 ************************************ 00:10:06.134 10:15:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:06.134 10:15:45 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:06.134 10:15:45 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:06.134 10:15:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.134 10:15:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.134 10:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.134 ************************************ 00:10:06.134 START TEST nvme_rpc 00:10:06.134 ************************************ 00:10:06.134 10:15:45 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:06.394 * Looking for test storage... 
00:10:06.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.394 10:15:45 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.394 --rc genhtml_branch_coverage=1 00:10:06.394 --rc genhtml_function_coverage=1 00:10:06.394 --rc genhtml_legend=1 00:10:06.394 --rc geninfo_all_blocks=1 00:10:06.394 --rc geninfo_unexecuted_blocks=1 00:10:06.394 00:10:06.394 ' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.394 --rc genhtml_branch_coverage=1 00:10:06.394 --rc genhtml_function_coverage=1 00:10:06.394 --rc genhtml_legend=1 00:10:06.394 --rc geninfo_all_blocks=1 00:10:06.394 --rc geninfo_unexecuted_blocks=1 00:10:06.394 00:10:06.394 ' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:06.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.394 --rc genhtml_branch_coverage=1 00:10:06.394 --rc genhtml_function_coverage=1 00:10:06.394 --rc genhtml_legend=1 00:10:06.394 --rc geninfo_all_blocks=1 00:10:06.394 --rc geninfo_unexecuted_blocks=1 00:10:06.394 00:10:06.394 ' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.394 --rc genhtml_branch_coverage=1 00:10:06.394 --rc genhtml_function_coverage=1 00:10:06.394 --rc genhtml_legend=1 00:10:06.394 --rc geninfo_all_blocks=1 00:10:06.394 --rc geninfo_unexecuted_blocks=1 00:10:06.394 00:10:06.394 ' 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77222 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:06.394 10:15:45 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77222 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77222 ']' 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.394 10:15:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.394 [2024-11-29 10:15:45.847203] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
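The get_first_nvme_bdf trace a few lines up reduces to: generate the SPDK NVMe config with gen_nvme.sh, pull every traddr out with jq, and take the first. Condensed, with the paths and addresses from this run (jq is assumed to be installed):

bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
bdf=${bdfs[0]}               # -> 0000:00:10.0, the controller Nvme0 attaches to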
00:10:06.394 [2024-11-29 10:15:45.847315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77222 ] 00:10:06.655 [2024-11-29 10:15:45.991464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.655 [2024-11-29 10:15:46.011416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.655 [2024-11-29 10:15:46.011501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.227 10:15:46 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.227 10:15:46 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:07.227 10:15:46 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:07.487 Nvme0n1 00:10:07.487 10:15:46 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:07.487 10:15:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:07.747 request: 00:10:07.747 { 00:10:07.747 "bdev_name": "Nvme0n1", 00:10:07.747 "filename": "non_existing_file", 00:10:07.747 "method": "bdev_nvme_apply_firmware", 00:10:07.747 "req_id": 1 00:10:07.747 } 00:10:07.747 Got JSON-RPC error response 00:10:07.747 response: 00:10:07.747 { 00:10:07.747 "code": -32603, 00:10:07.747 "message": "open file failed." 00:10:07.747 } 00:10:07.747 10:15:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:07.747 10:15:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:07.747 10:15:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:08.006 10:15:47 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:08.006 10:15:47 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77222 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77222 ']' 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77222 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77222 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.006 killing process with pid 77222 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77222' 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77222 00:10:08.006 10:15:47 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77222 00:10:08.266 00:10:08.266 real 0m2.063s 00:10:08.266 user 0m4.036s 00:10:08.266 sys 0m0.480s 00:10:08.266 10:15:47 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.266 10:15:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.266 ************************************ 00:10:08.266 END TEST nvme_rpc 00:10:08.266 ************************************ 00:10:08.266 10:15:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:08.266 10:15:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:08.266 10:15:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.266 10:15:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.266 ************************************ 00:10:08.266 START TEST nvme_rpc_timeouts 00:10:08.266 ************************************ 00:10:08.266 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:08.527 * Looking for test storage... 00:10:08.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.527 10:15:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.527 --rc genhtml_branch_coverage=1 00:10:08.527 --rc genhtml_function_coverage=1 00:10:08.527 --rc genhtml_legend=1 00:10:08.527 --rc geninfo_all_blocks=1 00:10:08.527 --rc geninfo_unexecuted_blocks=1 00:10:08.527 00:10:08.527 ' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.527 --rc genhtml_branch_coverage=1 00:10:08.527 --rc genhtml_function_coverage=1 00:10:08.527 --rc genhtml_legend=1 00:10:08.527 --rc geninfo_all_blocks=1 00:10:08.527 --rc geninfo_unexecuted_blocks=1 00:10:08.527 00:10:08.527 ' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.527 --rc genhtml_branch_coverage=1 00:10:08.527 --rc genhtml_function_coverage=1 00:10:08.527 --rc genhtml_legend=1 00:10:08.527 --rc geninfo_all_blocks=1 00:10:08.527 --rc geninfo_unexecuted_blocks=1 00:10:08.527 00:10:08.527 ' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.527 --rc genhtml_branch_coverage=1 00:10:08.527 --rc genhtml_function_coverage=1 00:10:08.527 --rc genhtml_legend=1 00:10:08.527 --rc geninfo_all_blocks=1 00:10:08.527 --rc geninfo_unexecuted_blocks=1 00:10:08.527 00:10:08.527 ' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.527 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77276 00:10:08.527 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77276 00:10:08.527 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77308 00:10:08.527 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
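The round trip nvme_rpc_timeouts.sh performs is traced below in pieces; put together it is: snapshot the default bdev_nvme options, change the three timeout knobs over JSON-RPC, snapshot again, then compare the two files setting by setting. A condensed sketch using the exact rpc.py calls and temp-file names from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# snapshot defaults, modify, snapshot again
"$rpc" save_config > /tmp/settings_default_77276
"$rpc" bdev_nvme_set_options \
	--timeout-us=12000000 \
	--timeout-admin-us=24000000 \
	--action-on-timeout=abort
"$rpc" save_config > /tmp/settings_modified_77276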
00:10:08.527 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77308 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77308 ']' 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.527 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.528 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.528 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.528 10:15:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:08.528 10:15:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:08.528 [2024-11-29 10:15:47.876506] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:10:08.528 [2024-11-29 10:15:47.876627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77308 ] 00:10:08.786 [2024-11-29 10:15:48.011624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.786 [2024-11-29 10:15:48.029509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.786 [2024-11-29 10:15:48.029541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.353 Checking default timeout settings: 00:10:09.353 10:15:48 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.353 10:15:48 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:09.353 10:15:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:09.353 10:15:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:09.612 Making settings changes with rpc: 00:10:09.612 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:09.612 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:09.870 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:09.870 Check default vs. 
modified settings: 00:10:09.870 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:10.129 Setting action_on_timeout is changed as expected. 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:10.129 Setting timeout_us is changed as expected. 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
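Each of the per-setting checks above is the same grep|awk|sed pipeline run against both snapshots; wrapped as a helper, the timeout_us comparison just traced looks like this (file names and values are this run's):

get_setting() {
	local key=$1 file=$2
	# second whitespace-separated field of the matching config line,
	# stripped of JSON punctuation
	grep "$key" "$file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}

before=$(get_setting timeout_us /tmp/settings_default_77276)    # -> 0
after=$(get_setting timeout_us /tmp/settings_modified_77276)    # -> 12000000
[[ $before == "$after" ]] || echo "Setting timeout_us is changed as expected."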
00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:10.129 Setting timeout_admin_us is changed as expected. 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77276 /tmp/settings_modified_77276 00:10:10.129 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77308 00:10:10.129 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77308 ']' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77308 00:10:10.129 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:10.129 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.129 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77308 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77308' 00:10:10.390 killing process with pid 77308 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77308 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77308 00:10:10.390 RPC TIMEOUT SETTING TEST PASSED. 00:10:10.390 10:15:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
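For reference, the default-vs-modified comparison traced above reduces to the shell pattern below. This is a minimal sketch reconstructed from the xtrace: the temp-file names, setting list, and grep/awk/sed pipeline match the log, but the surrounding loop is simplified and the error branch is illustrative.

# Compare each timeout setting between the saved default and modified configs.
settings_to_check='action_on_timeout timeout_us timeout_admin_us'
for setting in $settings_to_check; do
    # Pull the setting's value out of the saved JSON config and strip
    # punctuation, exactly as the traced pipeline does.
    before=$(grep "$setting" /tmp/settings_default_77276 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_77276 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
        echo "Setting $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done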
00:10:10.390 00:10:10.390 real 0m2.157s 00:10:10.390 user 0m4.409s 00:10:10.390 sys 0m0.424s 00:10:10.390 ************************************ 00:10:10.390 END TEST nvme_rpc_timeouts 00:10:10.390 ************************************ 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.390 10:15:49 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:10.652 10:15:49 -- spdk/autotest.sh@239 -- # uname -s 00:10:10.652 10:15:49 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:10.652 10:15:49 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.652 10:15:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.652 10:15:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.652 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:10:10.652 ************************************ 00:10:10.652 START TEST sw_hotplug 00:10:10.652 ************************************ 00:10:10.652 10:15:49 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.652 * Looking for test storage... 00:10:10.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:10.652 10:15:49 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.652 10:15:49 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.652 10:15:49 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.652 10:15:50 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.652 10:15:50 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:10.652 10:15:50 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.652 10:15:50 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.652 --rc genhtml_branch_coverage=1 00:10:10.652 --rc genhtml_function_coverage=1 00:10:10.652 --rc genhtml_legend=1 00:10:10.652 --rc geninfo_all_blocks=1 00:10:10.652 --rc geninfo_unexecuted_blocks=1 00:10:10.652 00:10:10.652 ' 00:10:10.652 10:15:50 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.652 --rc genhtml_branch_coverage=1 00:10:10.652 --rc genhtml_function_coverage=1 00:10:10.652 --rc genhtml_legend=1 00:10:10.652 --rc geninfo_all_blocks=1 00:10:10.652 --rc geninfo_unexecuted_blocks=1 00:10:10.652 00:10:10.652 ' 00:10:10.652 10:15:50 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.652 --rc genhtml_branch_coverage=1 00:10:10.652 --rc genhtml_function_coverage=1 00:10:10.652 --rc genhtml_legend=1 00:10:10.652 --rc geninfo_all_blocks=1 00:10:10.652 --rc geninfo_unexecuted_blocks=1 00:10:10.652 00:10:10.652 ' 00:10:10.652 10:15:50 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.652 --rc genhtml_branch_coverage=1 00:10:10.652 --rc genhtml_function_coverage=1 00:10:10.652 --rc genhtml_legend=1 00:10:10.652 --rc geninfo_all_blocks=1 00:10:10.652 --rc geninfo_unexecuted_blocks=1 00:10:10.652 00:10:10.652 ' 00:10:10.652 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:10.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.189 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.189 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.189 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.190 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
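The cmp_versions trace above is worth unpacking: both version strings are split on '.', '-', and ':' into arrays and compared component-wise, padding the shorter one with zeros. A minimal sketch under those assumptions follows (the helper name is illustrative, and the real helper also normalizes non-numeric components, which is omitted here):

# Return 0 (true) when version $1 is strictly older than version $2.
version_lt() {
    # Split both versions on . - : into arrays, as the traced IFS shows.
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}       # missing components count as 0
        (( a > b )) && return 1           # strictly greater somewhere: not older
        (( a < b )) && return 0           # strictly smaller: older
    done
    return 1                              # equal is not "less than"
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'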
00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.190 10:15:50 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:11.190 10:15:50 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:11.190 10:15:50 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.713 Waiting for block devices as requested 00:10:11.713 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.713 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.974 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.974 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:17.267 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:17.267 10:15:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:17.267 10:15:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:17.529 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:17.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.530 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:17.791 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:18.053 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.053 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.053 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:18.053 10:15:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78153 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:18.323 10:15:57 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:18.323 10:15:57 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:18.323 10:15:57 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:18.323 10:15:57 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:18.323 10:15:57 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:18.323 10:15:57 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:18.591 Initializing NVMe Controllers 00:10:18.591 Attaching to 0000:00:10.0 00:10:18.591 Attaching to 0000:00:11.0 00:10:18.591 Attached to 0000:00:10.0 00:10:18.591 Attached to 0000:00:11.0 00:10:18.591 Initialization complete. Starting I/O... 
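The debug_remove_attach_helper/timing_cmd pair traced just above times the hotplug helper with bash's time keyword and TIMEFORMAT=%2R (real time, two decimals); that is where the "42.93" figure later in the log comes from. A simplified sketch of the pattern follows. The real wrapper juggles file descriptors with exec so the timed command's own output still reaches the log; here it is discarded for brevity.

timing_cmd() {
    local cmd_es=0
    local time=0 TIMEFORMAT=%2R
    # `time` writes its report to stderr; the brace group plus 2>&1 captures
    # that report while the command's own output goes to /dev/null.
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
    echo "$time"
    return $cmd_es
}
helper_time=$(timing_cmd sleep 1)
printf 'remove_attach_helper took %ss to complete\n' "$helper_time"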
00:10:18.591 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:18.591 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:18.591 00:10:19.535 QEMU NVMe Ctrl (12340 ): 2452 I/Os completed (+2452) 00:10:19.535 QEMU NVMe Ctrl (12341 ): 2452 I/Os completed (+2452) 00:10:19.535 00:10:20.478 QEMU NVMe Ctrl (12340 ): 5564 I/Os completed (+3112) 00:10:20.478 QEMU NVMe Ctrl (12341 ): 5564 I/Os completed (+3112) 00:10:20.478 00:10:21.423 QEMU NVMe Ctrl (12340 ): 8684 I/Os completed (+3120) 00:10:21.423 QEMU NVMe Ctrl (12341 ): 8684 I/Os completed (+3120) 00:10:21.423 00:10:22.366 QEMU NVMe Ctrl (12340 ): 11828 I/Os completed (+3144) 00:10:22.366 QEMU NVMe Ctrl (12341 ): 11828 I/Os completed (+3144) 00:10:22.366 00:10:23.747 QEMU NVMe Ctrl (12340 ): 14924 I/Os completed (+3096) 00:10:23.747 QEMU NVMe Ctrl (12341 ): 14924 I/Os completed (+3096) 00:10:23.747 00:10:24.311 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.311 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.311 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.311 [2024-11-29 10:16:03.608819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.311 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.311 [2024-11-29 10:16:03.609613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.311 [2024-11-29 10:16:03.609649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.311 [2024-11-29 10:16:03.609661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.311 [2024-11-29 10:16:03.609674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.311 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.311 [2024-11-29 10:16:03.610997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.611095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.611122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.611173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.312 [2024-11-29 10:16:03.631076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
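The controller-removed/attached cycle visible above is software hotplug: the test yanks each controller out from under the running app, waits, and brings it back. The helper's internals are not shown in this log, but on Linux one such cycle is typically driven through the standard PCI sysfs nodes, roughly as sketched below. The bdf value and the 6-second wait are taken from the log; the exact sequence the helper uses is an assumption.

bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # hot-remove the controller
sleep 6                                       # hotplug_wait, per the trace
echo 1 > /sys/bus/pci/rescan                  # rediscover it on the bus
# Steer the rediscovered device back to the userspace driver before probing,
# then clear the override so later rebinds are unaffected.
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"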
00:10:24.312 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.312 [2024-11-29 10:16:03.631913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.631943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.631956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.631967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.312 [2024-11-29 10:16:03.632780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.632895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.632927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 [2024-11-29 10:16:03.632986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.312 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:24.312 EAL: Scan for (pci) bus failed. 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.312 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.569 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.569 Attaching to 0000:00:10.0 00:10:24.569 Attached to 0000:00:10.0 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.569 10:16:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.569 Attaching to 0000:00:11.0 00:10:24.569 Attached to 0000:00:11.0 00:10:25.502 QEMU NVMe Ctrl (12340 ): 4142 I/Os completed (+4142) 00:10:25.502 QEMU NVMe Ctrl (12341 ): 4487 I/Os completed (+4487) 00:10:25.502 00:10:26.446 QEMU NVMe Ctrl (12340 ): 7982 I/Os completed (+3840) 00:10:26.446 QEMU NVMe Ctrl (12341 ): 8935 I/Os completed (+4448) 00:10:26.446 00:10:27.410 QEMU NVMe Ctrl (12340 ): 11647 I/Os completed (+3665) 00:10:27.410 QEMU NVMe Ctrl (12341 ): 12635 I/Os completed (+3700) 00:10:27.410 00:10:28.399 QEMU NVMe Ctrl (12340 ): 14910 I/Os completed (+3263) 00:10:28.399 QEMU NVMe Ctrl (12341 ): 15915 I/Os completed (+3280) 00:10:28.399 00:10:29.342 QEMU NVMe Ctrl (12340 ): 18082 I/Os completed (+3172) 00:10:29.342 QEMU NVMe Ctrl (12341 ): 19093 I/Os completed (+3178) 00:10:29.342 00:10:30.717 QEMU NVMe Ctrl (12340 ): 22424 I/Os completed (+4342) 00:10:30.717 QEMU NVMe Ctrl (12341 ): 23407 I/Os completed (+4314) 00:10:30.717 00:10:31.654 QEMU NVMe Ctrl (12340 ): 26773 I/Os completed (+4349) 00:10:31.654 QEMU NVMe Ctrl (12341 ): 27725 I/Os completed (+4318) 
00:10:31.654 00:10:32.592 QEMU NVMe Ctrl (12340 ): 31097 I/Os completed (+4324) 00:10:32.592 QEMU NVMe Ctrl (12341 ): 32031 I/Os completed (+4306) 00:10:32.592 00:10:33.539 QEMU NVMe Ctrl (12340 ): 34402 I/Os completed (+3305) 00:10:33.539 QEMU NVMe Ctrl (12341 ): 35416 I/Os completed (+3385) 00:10:33.539 00:10:34.506 QEMU NVMe Ctrl (12340 ): 37793 I/Os completed (+3391) 00:10:34.506 QEMU NVMe Ctrl (12341 ): 38796 I/Os completed (+3380) 00:10:34.506 00:10:35.441 QEMU NVMe Ctrl (12340 ): 42099 I/Os completed (+4306) 00:10:35.441 QEMU NVMe Ctrl (12341 ): 43075 I/Os completed (+4279) 00:10:35.441 00:10:36.379 QEMU NVMe Ctrl (12340 ): 46234 I/Os completed (+4135) 00:10:36.379 QEMU NVMe Ctrl (12341 ): 47189 I/Os completed (+4114) 00:10:36.380 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.646 [2024-11-29 10:16:15.885446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:36.646 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.646 [2024-11-29 10:16:15.887376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.887613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.887755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.887892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.646 [2024-11-29 10:16:15.889877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.889941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.889956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.889973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.646 [2024-11-29 10:16:15.909682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:36.646 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.646 [2024-11-29 10:16:15.910873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.911029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.911053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.911067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.646 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:36.646 EAL: Scan for (pci) bus failed. 
00:10:36.646 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.646 [2024-11-29 10:16:15.912399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.912438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.912458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 [2024-11-29 10:16:15.912471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.646 10:16:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.646 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.646 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.646 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.646 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.646 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:36.646 Attaching to 0000:00:10.0 00:10:36.646 Attached to 0000:00:10.0 00:10:36.906 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.906 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.906 10:16:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.906 Attaching to 0000:00:11.0 00:10:36.906 Attached to 0000:00:11.0 00:10:37.478 QEMU NVMe Ctrl (12340 ): 2003 I/Os completed (+2003) 00:10:37.478 QEMU NVMe Ctrl (12341 ): 1882 I/Os completed (+1882) 00:10:37.478 00:10:38.425 QEMU NVMe Ctrl (12340 ): 5007 I/Os completed (+3004) 00:10:38.425 QEMU NVMe Ctrl (12341 ): 4895 I/Os completed (+3013) 00:10:38.425 00:10:39.367 QEMU NVMe Ctrl (12340 ): 8043 I/Os completed (+3036) 00:10:39.367 QEMU NVMe Ctrl (12341 ): 7983 I/Os completed (+3088) 00:10:39.367 00:10:40.746 QEMU NVMe Ctrl (12340 ): 12228 I/Os completed (+4185) 00:10:40.746 QEMU NVMe Ctrl (12341 ): 12148 I/Os completed (+4165) 00:10:40.746 00:10:41.758 QEMU NVMe Ctrl (12340 ): 15876 I/Os completed (+3648) 00:10:41.758 QEMU NVMe Ctrl (12341 ): 15814 I/Os completed (+3666) 00:10:41.758 00:10:42.695 QEMU NVMe Ctrl (12340 ): 19489 I/Os completed (+3613) 00:10:42.695 QEMU NVMe Ctrl (12341 ): 19430 I/Os completed (+3616) 00:10:42.695 00:10:43.629 QEMU NVMe Ctrl (12340 ): 23766 I/Os completed (+4277) 00:10:43.629 QEMU NVMe Ctrl (12341 ): 23694 I/Os completed (+4264) 00:10:43.629 00:10:44.564 QEMU NVMe Ctrl (12340 ): 27992 I/Os completed (+4226) 00:10:44.564 QEMU NVMe Ctrl (12341 ): 27933 I/Os completed (+4239) 00:10:44.564 00:10:45.499 QEMU NVMe Ctrl (12340 ): 32242 I/Os completed (+4250) 00:10:45.499 QEMU NVMe Ctrl (12341 ): 32182 I/Os completed (+4249) 00:10:45.499 00:10:46.440 QEMU NVMe Ctrl (12340 ): 36202 I/Os completed (+3960) 00:10:46.440 QEMU NVMe Ctrl (12341 ): 36138 I/Os completed (+3956) 00:10:46.440 00:10:47.385 QEMU NVMe Ctrl (12340 ): 39254 I/Os completed (+3052) 00:10:47.385 QEMU NVMe Ctrl (12341 ): 39193 I/Os completed (+3055) 00:10:47.385 00:10:48.772 QEMU NVMe Ctrl (12340 ): 42374 I/Os completed (+3120) 00:10:48.772 QEMU NVMe Ctrl (12341 ): 42313 I/Os completed (+3120) 00:10:48.772 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 
00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.772 [2024-11-29 10:16:28.132765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:48.772 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:48.772 [2024-11-29 10:16:28.134827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.135101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.135165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.135287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:48.772 [2024-11-29 10:16:28.137673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.137791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.137882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.137933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.772 [2024-11-29 10:16:28.161728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:48.772 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:48.772 [2024-11-29 10:16:28.163511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.163616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.163674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.163738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:48.772 [2024-11-29 10:16:28.165715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.165773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.165794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 [2024-11-29 10:16:28.165831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:48.772 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.035 10:16:28 sw_hotplug -- 
nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.035 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:49.035 Attaching to 0000:00:10.0 00:10:49.035 Attached to 0000:00:10.0 00:10:49.297 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.297 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.297 10:16:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:49.297 Attaching to 0000:00:11.0 00:10:49.297 Attached to 0000:00:11.0 00:10:49.297 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:49.297 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:49.297 [2024-11-29 10:16:28.536854] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:01.535 10:16:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:01.535 10:16:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.535 10:16:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.93 00:11:01.535 10:16:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.93 00:11:01.535 10:16:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:01.535 10:16:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:11:01.535 10:16:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:11:01.535 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 10:16:40 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78153 00:11:08.140 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78153) - No such process 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78153 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:08.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78705 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78705 00:11:08.140 10:16:46 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78705 ']' 00:11:08.140 10:16:46 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:08.140 10:16:46 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.140 10:16:46 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.140 10:16:46 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.140 10:16:46 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.140 10:16:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.141 [2024-11-29 10:16:46.623026] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
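The waitforlisten call above blocks until the freshly started spdk_tgt answers on its RPC socket, giving up after a retry budget. A simplified sketch of that pattern, assuming the socket path and max_retries=100 shown in the trace (the real helper also confirms the target actually responds to an RPC, which is omitted here):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died while we waited
        [[ -S $rpc_addr ]] && return 0            # socket exists: it is listening
        sleep 0.1
    done
    return 1   # retries exhausted
}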
00:11:08.141 [2024-11-29 10:16:46.623436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:11:08.141 [2024-11-29 10:16:46.771637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.141 [2024-11-29 10:16:46.800419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:08.141 10:16:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:08.141 10:16:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.715 10:16:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.715 10:16:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.715 10:16:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:14.715 10:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.715 [2024-11-29 10:16:53.594429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:14.715 [2024-11-29 10:16:53.595505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:53.595536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:53.595549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 [2024-11-29 10:16:53.595562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:53.595570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:53.595577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 [2024-11-29 10:16:53.595586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:53.595592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:53.595601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 [2024-11-29 10:16:53.595607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:53.595615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:53.595621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.715 10:16:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.715 10:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.715 [2024-11-29 10:16:54.094432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:14.715 [2024-11-29 10:16:54.095519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:54.095548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:54.095557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 [2024-11-29 10:16:54.095568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:54.095575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:54.095583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 [2024-11-29 10:16:54.095589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:54.095597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:54.095604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 [2024-11-29 10:16:54.095614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.715 [2024-11-29 10:16:54.095620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.715 [2024-11-29 10:16:54.095628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.715 10:16:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:14.715 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.283 10:16:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.283 10:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.283 10:16:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.283 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.541 10:16:54 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.541 10:16:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.744 10:17:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.744 10:17:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.744 10:17:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.744 10:17:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.744 10:17:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.744 10:17:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:27.744 [2024-11-29 10:17:06.994607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
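The bdev_bdfs helper driving the wait loop above is visible almost verbatim in the trace: it lists the target's bdevs over RPC and extracts the unique NVMe PCI addresses with jq, so the loop can tell when a removed controller's bdevs are actually gone. A sketch using the rpc.py path shown earlier in the log:

bdev_bdfs() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
# One iteration of the wait loop: keep polling while any bdf is still present.
bdfs=($(bdev_bdfs))
if (( ${#bdfs[@]} > 0 )); then
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
fi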
00:11:27.744 10:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:27.744 [2024-11-29 10:17:06.995648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.744 [2024-11-29 10:17:06.995682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.744 [2024-11-29 10:17:06.995694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.744 [2024-11-29 10:17:06.995707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.744 [2024-11-29 10:17:06.995715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.744 [2024-11-29 10:17:06.995722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.744 [2024-11-29 10:17:06.995730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.745 [2024-11-29 10:17:06.995736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.745 [2024-11-29 10:17:06.995744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.745 [2024-11-29 10:17:06.995750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.745 [2024-11-29 10:17:06.995759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.745 [2024-11-29 10:17:06.995766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.003 [2024-11-29 10:17:07.394614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:28.003 [2024-11-29 10:17:07.395634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.003 [2024-11-29 10:17:07.395663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.003 [2024-11-29 10:17:07.395672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.003 [2024-11-29 10:17:07.395683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.003 [2024-11-29 10:17:07.395690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.003 [2024-11-29 10:17:07.395698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.003 [2024-11-29 10:17:07.395704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.003 [2024-11-29 10:17:07.395712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.003 [2024-11-29 10:17:07.395718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.003 [2024-11-29 10:17:07.395726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.003 [2024-11-29 10:17:07.395732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.003 [2024-11-29 10:17:07.395740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.262 10:17:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.262 10:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.262 10:17:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.262 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:28.520 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:28.520 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.520 10:17:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.724 10:17:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.724 10:17:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.724 10:17:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.724 10:17:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.724 10:17:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.724 10:17:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:40.724 10:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:40.724 [2024-11-29 10:17:19.894837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
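The two `echo 1` statements traced at nvme/sw_hotplug.sh:40 just above are the surprise-removal step, one write per device in ${nvmes[@]}. The xtrace records only the echoed value, never the redirection target, so the sysfs path below is an assumption based on the usual Linux PCI removal interface, not a transcription of the script:

    # Hypothetical reconstruction of the removal loop at nvme/sw_hotplug.sh:39-40.
    # The /sys path is an assumption; the xtrace shows only 'echo 1'.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done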
00:11:40.724 [2024-11-29 10:17:19.895887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.724 [2024-11-29 10:17:19.895919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.724 [2024-11-29 10:17:19.895935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.724 [2024-11-29 10:17:19.895947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.724 [2024-11-29 10:17:19.895955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.724 [2024-11-29 10:17:19.895963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.724 [2024-11-29 10:17:19.895971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.724 [2024-11-29 10:17:19.895978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.724 [2024-11-29 10:17:19.895986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.724 [2024-11-29 10:17:19.895992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.724 [2024-11-29 10:17:19.896000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.724 [2024-11-29 10:17:19.896006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.983 10:17:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.983 10:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.983 10:17:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:40.983 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:41.242 [2024-11-29 10:17:20.494832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
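Once a controller reports the failed state, the test sits in the wait loop traced at nvme/sw_hotplug.sh:50-51, re-listing the surviving BDFs every half second; the count visibly steps from 2 to 1 to 0 across these entries. Reconstructed from the trace:

    # Wait loop per the xtrace at nvme/sw_hotplug.sh:50-51: poll
    # bdev_bdfs until no PCI address is still backing an NVMe bdev.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done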
00:11:41.242 [2024-11-29 10:17:20.495844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.242 [2024-11-29 10:17:20.495875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.242 [2024-11-29 10:17:20.495884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.242 [2024-11-29 10:17:20.495896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.242 [2024-11-29 10:17:20.495903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.242 [2024-11-29 10:17:20.495913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.242 [2024-11-29 10:17:20.495919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.242 [2024-11-29 10:17:20.495927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.242 [2024-11-29 10:17:20.495934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.242 [2024-11-29 10:17:20.495942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.242 [2024-11-29 10:17:20.495948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.242 [2024-11-29 10:17:20.495956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.501 10:17:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.501 10:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 10:17:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:41.501 10:17:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
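With zero BDFs left, the re-attach sequence at nvme/sw_hotplug.sh:56-62 runs: a single `echo 1` (presumably a bus rescan), then per device a driver name, the BDF twice, and an empty string. The xtrace again hides every redirection target, so the mapping to sysfs nodes below is a guess at the conventional driver_override dance, not a verbatim copy:

    # Hypothetical re-attach per nvme/sw_hotplug.sh:56-62; every /sys path
    # here is an assumption inferred from the echoed values alone.
    echo 1 > /sys/bus/pci/rescan                                            # :56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # :59
        # :60 and :61 echo the BDF to two different nodes; one plausible
        # target is shown here.
        echo "$dev" > /sys/bus/pci/drivers_probe                            # :60, :61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # :62
    done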
00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:41.760 10:17:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.74 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.74 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:11:53.975 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:53.975 10:17:33 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:53.975 10:17:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:53.975 10:17:33 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.632 10:17:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.632 10:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.632 10:17:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.632 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.632 [2024-11-29 10:17:39.369522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:00.632 [2024-11-29 10:17:39.370346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.632 [2024-11-29 10:17:39.370374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.370386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.370398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.370407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.370414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.370421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.370428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.370439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.370445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.370455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.370461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.769527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
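Before this second round, the trace at nvme/sw_hotplug.sh:119-120 disabled and re-enabled the bdev_nvme hotplug poller over RPC, then re-entered the helper via debug_remove_attach_helper 3 6 true (three events, a six-second hotplug wait, bdev-based verification, per the locals at :27-29). The two RPC calls as traced:

    # Toggle SPDK's own hotplug monitor before driving events manually,
    # as traced at nvme/sw_hotplug.sh:119-120.
    rpc_cmd bdev_nvme_set_hotplug -d   # disable
    rpc_cmd bdev_nvme_set_hotplug -e   # re-enable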
00:12:00.633 [2024-11-29 10:17:39.770254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.770281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.770290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.770300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.770307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.770316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.770322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.770330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.770337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 [2024-11-29 10:17:39.770344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.633 [2024-11-29 10:17:39.770351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.633 [2024-11-29 10:17:39.770360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.633 10:17:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.633 10:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.633 10:17:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.633 10:17:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.633 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.633 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.633 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.633 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.633 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:00.892 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.892 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.892 10:17:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.094 10:17:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.094 10:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.094 10:17:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.094 [2024-11-29 10:17:52.169729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:13.094 [2024-11-29 10:17:52.170594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.094 [2024-11-29 10:17:52.170625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.094 [2024-11-29 10:17:52.170637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.094 [2024-11-29 10:17:52.170649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.094 [2024-11-29 10:17:52.170658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.094 [2024-11-29 10:17:52.170665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.094 [2024-11-29 10:17:52.170672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.094 [2024-11-29 10:17:52.170679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.094 [2024-11-29 10:17:52.170689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.094 [2024-11-29 10:17:52.170695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.094 [2024-11-29 10:17:52.170703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.094 [2024-11-29 10:17:52.170709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.094 10:17:52 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.094 10:17:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.094 10:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.094 10:17:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:13.094 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:13.353 [2024-11-29 10:17:52.669733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:13.353 [2024-11-29 10:17:52.670452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.353 [2024-11-29 10:17:52.670484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.353 [2024-11-29 10:17:52.670493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.353 [2024-11-29 10:17:52.670505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.353 [2024-11-29 10:17:52.670511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.353 [2024-11-29 10:17:52.670519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.353 [2024-11-29 10:17:52.670525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.353 [2024-11-29 10:17:52.670533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.353 [2024-11-29 10:17:52.670540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.353 [2024-11-29 10:17:52.670547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.353 [2024-11-29 10:17:52.670553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.353 [2024-11-29 10:17:52.670561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:13.353 10:17:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.353 10:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.353 10:17:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:13.353 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.612 10:17:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:25.815 10:18:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:25.815 10:18:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:25.815 10:18:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:25.815 10:18:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.815 10:18:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.815 10:18:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.815 10:18:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.815 10:18:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.815 10:18:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.815 10:18:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.815 10:18:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.815 [2024-11-29 10:18:05.069956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
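Each iteration of the helper follows the same shape visible across the xtrace: decrement the event counter, remove both devices, wait for the bdevs to vanish, re-attach, then sleep twice the hotplug wait before checking that both BDFs are back. A skeleton assembled from the traced line numbers (bodies elided; not a verbatim copy of the script):

    # Skeleton of remove_attach_helper reconstructed from the xtrace
    # (nvme/sw_hotplug.sh:27-71); comments cite the traced lines.
    remove_attach_helper() {
        local hotplug_events=$1   # 3 in this run            (:27)
        local hotplug_wait=$2     # 6 seconds                (:28)
        local use_bdev=$3         # true -> verify over RPC  (:29)
        local dev bdfs
        sleep "$hotplug_wait"                                 # :36
        while (( hotplug_events-- )); do                      # :38
            # :39-40  remove each device
            # :50-51  poll until bdev_bdfs returns nothing
            # :56-62  re-attach the devices
            sleep $((hotplug_wait * 2))                       # :66
            # :70-71  assert both BDFs are listed again
        done
    }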
00:12:25.815 [2024-11-29 10:18:05.070697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.815 [2024-11-29 10:18:05.070727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.815 [2024-11-29 10:18:05.070739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.815 [2024-11-29 10:18:05.070750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.815 [2024-11-29 10:18:05.070760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.815 [2024-11-29 10:18:05.070767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.815 [2024-11-29 10:18:05.070775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.815 [2024-11-29 10:18:05.070782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.815 [2024-11-29 10:18:05.070790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.815 [2024-11-29 10:18:05.070796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.815 [2024-11-29 10:18:05.070815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.815 [2024-11-29 10:18:05.070822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.815 10:18:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:25.815 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:26.073 [2024-11-29 10:18:05.469963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
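The "remove_attach_helper took 45.74s/44.63s" summaries are produced by timing_cmd, which runs the helper under bash's `time` keyword with TIMEFORMAT=%2R so only wall-clock seconds come out, then hands the figure to the printf at nvme/sw_hotplug.sh:21-22. A simplified sketch of the traced mechanics (common/autotest_common.sh:709-722); the real helper reroutes the command's own output with exec at :711, which is elided here:

    # Simplified timing_cmd: capture `time`'s stderr, which
    # TIMEFORMAT=%2R reduces to bare elapsed seconds.
    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R                       # :713
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 ) || cmd_es=$?
        echo "$time"                                      # :720
        return "$cmd_es"                                  # :722
    }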
00:12:26.073 [2024-11-29 10:18:05.470671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.073 [2024-11-29 10:18:05.470702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.073 [2024-11-29 10:18:05.470711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.073 [2024-11-29 10:18:05.470721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.073 [2024-11-29 10:18:05.470728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.073 [2024-11-29 10:18:05.470737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.073 [2024-11-29 10:18:05.470743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.073 [2024-11-29 10:18:05.470753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.073 [2024-11-29 10:18:05.470759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.073 [2024-11-29 10:18:05.470767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.073 [2024-11-29 10:18:05.470773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.073 [2024-11-29 10:18:05.470780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.331 10:18:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.331 10:18:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.331 10:18:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.331 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:26.590 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:26.590 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.590 10:18:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:12:38.796 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:38.796 10:18:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78705 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78705 ']' 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78705 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78705 00:12:38.796 killing process with pid 78705 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78705' 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78705 00:12:38.796 10:18:17 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78705 00:12:38.796 10:18:18 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:39.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:39.632 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:39.632 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:39.632 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.632 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.894 00:12:39.894 real 2m29.259s 00:12:39.894 user 1m49.302s 00:12:39.894 sys 0m18.461s 00:12:39.894 10:18:19 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.894 ************************************ 00:12:39.894 END TEST sw_hotplug 00:12:39.894 ************************************ 00:12:39.894 10:18:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.894 10:18:19 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:39.894 10:18:19 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:39.894 10:18:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.894 10:18:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.894 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:12:39.894 ************************************ 00:12:39.894 START TEST nvme_xnvme 00:12:39.894 ************************************ 00:12:39.894 10:18:19 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:39.894 * Looking for test storage... 00:12:39.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:39.895 10:18:19 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:39.895 10:18:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:39.895 10:18:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.160 10:18:19 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.160 --rc genhtml_branch_coverage=1 00:12:40.160 --rc genhtml_function_coverage=1 00:12:40.160 --rc genhtml_legend=1 00:12:40.160 --rc geninfo_all_blocks=1 00:12:40.160 --rc geninfo_unexecuted_blocks=1 00:12:40.160 00:12:40.160 ' 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.160 --rc genhtml_branch_coverage=1 00:12:40.160 --rc genhtml_function_coverage=1 00:12:40.160 --rc genhtml_legend=1 00:12:40.160 --rc geninfo_all_blocks=1 00:12:40.160 --rc geninfo_unexecuted_blocks=1 00:12:40.160 00:12:40.160 ' 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.160 --rc genhtml_branch_coverage=1 00:12:40.160 --rc genhtml_function_coverage=1 00:12:40.160 --rc genhtml_legend=1 00:12:40.160 --rc geninfo_all_blocks=1 00:12:40.160 --rc geninfo_unexecuted_blocks=1 00:12:40.160 00:12:40.160 ' 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.160 --rc genhtml_branch_coverage=1 00:12:40.160 --rc genhtml_function_coverage=1 00:12:40.160 --rc genhtml_legend=1 00:12:40.160 --rc geninfo_all_blocks=1 00:12:40.160 --rc geninfo_unexecuted_blocks=1 00:12:40.160 00:12:40.160 ' 00:12:40.160 10:18:19 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:40.160 10:18:19 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:40.160 10:18:19 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:40.161 10:18:19 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
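The lcov probe traced a few entries up (scripts/common.sh:333-368) decides whether the branch/function coverage flags get exported: `lt 1.15 2` splits both version strings on '.', '-' and ':' and compares them component by component as decimals. A condensed, slightly simplified sketch of that cmp_versions logic (assumes purely numeric components):

    # Condensed cmp_versions per the trace at scripts/common.sh:333-368.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return            # first difference decides
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return
            fi
        done
        [[ $op == '>=' || $op == '<=' ]]            # equal versions
    }
    lt() { cmp_versions "$1" '<' "$2"; }            # wrapper traced at :373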
00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:40.161 10:18:19 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:40.161 10:18:19 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:40.161 10:18:19 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:40.161 #define SPDK_CONFIG_H 00:12:40.161 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:40.161 #define SPDK_CONFIG_APPS 1 00:12:40.161 #define SPDK_CONFIG_ARCH native 00:12:40.161 #define SPDK_CONFIG_ASAN 1 00:12:40.161 #undef SPDK_CONFIG_AVAHI 00:12:40.161 #undef SPDK_CONFIG_CET 00:12:40.161 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:40.161 #define SPDK_CONFIG_COVERAGE 1 00:12:40.161 #define SPDK_CONFIG_CROSS_PREFIX 00:12:40.161 #undef SPDK_CONFIG_CRYPTO 00:12:40.161 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:40.161 #undef SPDK_CONFIG_CUSTOMOCF 00:12:40.161 #undef SPDK_CONFIG_DAOS 00:12:40.161 #define SPDK_CONFIG_DAOS_DIR 00:12:40.161 #define SPDK_CONFIG_DEBUG 1 00:12:40.161 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:40.161 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:40.161 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:40.161 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.161 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:40.161 #undef SPDK_CONFIG_DPDK_UADK 00:12:40.161 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:40.161 #define SPDK_CONFIG_EXAMPLES 1 00:12:40.161 #undef SPDK_CONFIG_FC 00:12:40.161 #define SPDK_CONFIG_FC_PATH 00:12:40.161 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:40.161 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:40.161 #define SPDK_CONFIG_FSDEV 1 00:12:40.161 #undef SPDK_CONFIG_FUSE 00:12:40.162 #undef SPDK_CONFIG_FUZZER 00:12:40.162 #define SPDK_CONFIG_FUZZER_LIB 00:12:40.162 #undef SPDK_CONFIG_GOLANG 00:12:40.162 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:40.162 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:40.162 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:40.162 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:40.162 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:40.162 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:40.162 #undef SPDK_CONFIG_HAVE_LZ4 00:12:40.162 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:40.162 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:40.162 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:40.162 #define SPDK_CONFIG_IDXD 1 00:12:40.162 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:40.162 #undef SPDK_CONFIG_IPSEC_MB 00:12:40.162 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:40.162 #define SPDK_CONFIG_ISAL 1 00:12:40.162 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:40.162 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:40.162 #define SPDK_CONFIG_LIBDIR 00:12:40.162 #undef SPDK_CONFIG_LTO 00:12:40.162 #define SPDK_CONFIG_MAX_LCORES 128 00:12:40.162 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:40.162 #define SPDK_CONFIG_NVME_CUSE 1 00:12:40.162 #undef SPDK_CONFIG_OCF 00:12:40.162 #define SPDK_CONFIG_OCF_PATH 00:12:40.162 #define SPDK_CONFIG_OPENSSL_PATH 00:12:40.162 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:40.162 #define SPDK_CONFIG_PGO_DIR 00:12:40.162 #undef SPDK_CONFIG_PGO_USE 00:12:40.162 #define SPDK_CONFIG_PREFIX /usr/local 00:12:40.162 #undef SPDK_CONFIG_RAID5F 00:12:40.162 #undef SPDK_CONFIG_RBD 00:12:40.162 #define SPDK_CONFIG_RDMA 1 00:12:40.162 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:40.162 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:40.162 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:40.162 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:40.162 #define SPDK_CONFIG_SHARED 1 00:12:40.162 #undef SPDK_CONFIG_SMA 00:12:40.162 #define SPDK_CONFIG_TESTS 1 00:12:40.162 #undef SPDK_CONFIG_TSAN 00:12:40.162 #define SPDK_CONFIG_UBLK 1 00:12:40.162 #define SPDK_CONFIG_UBSAN 1 00:12:40.162 #undef SPDK_CONFIG_UNIT_TESTS 00:12:40.162 #undef SPDK_CONFIG_URING 00:12:40.162 #define SPDK_CONFIG_URING_PATH 00:12:40.162 #undef SPDK_CONFIG_URING_ZNS 00:12:40.162 #undef SPDK_CONFIG_USDT 00:12:40.162 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:40.162 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:40.162 #undef SPDK_CONFIG_VFIO_USER 00:12:40.162 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:40.162 #define SPDK_CONFIG_VHOST 1 00:12:40.162 #define SPDK_CONFIG_VIRTIO 1 00:12:40.162 #undef SPDK_CONFIG_VTUNE 00:12:40.162 #define SPDK_CONFIG_VTUNE_DIR 00:12:40.162 #define SPDK_CONFIG_WERROR 1 00:12:40.162 #define SPDK_CONFIG_WPDK_DIR 00:12:40.162 #define SPDK_CONFIG_XNVME 1 00:12:40.162 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:40.162 10:18:19 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.162 10:18:19 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.162 10:18:19 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.162 10:18:19 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.162 10:18:19 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.162 10:18:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.162 10:18:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.162 10:18:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.162 10:18:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:40.162 10:18:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:40.162 10:18:19 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:40.162 10:18:19 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:40.163 10:18:19 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
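The run of ": N" / "export SPDK_TEST_*" pairs traced here is the usual shell flag-default idiom: each knob keeps whatever the caller already exported and otherwise falls back to the default visible in the trace. A minimal sketch of that pattern, reconstructed from the expanded trace rather than quoted from autotest_common.sh (the variable subset is illustrative):

# Keep the caller's value if set, else apply the default, then export
# so child scripts and spawned test binaries see the same knobs.
# Defaults below are the ones visible in the trace for this run.
: "${RUN_NIGHTLY:=1}";     export RUN_NIGHTLY
: "${SPDK_TEST_NVME:=1}";  export SPDK_TEST_NVME
: "${SPDK_TEST_XNVME:=1}"; export SPDK_TEST_XNVME
: "${SPDK_TEST_URING:=0}"; export SPDK_TEST_URING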
00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
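Among the sanitizer-setup records above, the leak-suppression steps are worth de-tracing; they amount to roughly the following sketch, assembled from the rm/echo/export lines in the trace rather than quoted from the script:

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"                       # start from a clean file
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known libfuse leak; ignore it
export LSAN_OPTIONS="suppressions=$asan_suppression_file"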
00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:40.163 10:18:19 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80061 ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80061 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3BMOdx 00:12:40.164 10:18:19 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.3BMOdx/tests/xnvme /tmp/spdk.3BMOdx 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13379903488 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6202630144 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13379903488 00:12:40.164 10:18:19 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6202630144 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98511589376 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1191190528 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:40.164 * Looking for test storage... 
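The "* Looking for test storage..." walk that follows reads more easily de-traced: df output is parsed into per-mount free-space figures, then the first candidate directory whose mount has at least the requested ~2.2 GB wins. A condensed sketch under those assumptions (the real helper also tracks fss/sizes/uses arrays and special-cases tmpfs/ramfs; paths below are the ones from this run):

testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
storage_fallback=/tmp/spdk.3BMOdx                  # from mktemp -udt spdk.XXXXXX
requested_size=2214592512                          # 2 GiB of test data + 64 MiB slack
declare -A avails
while read -r src fs size used avail _ mount; do
    avails[$mount]=$((avail * 1024))               # df -T reports 1K blocks
done < <(df -T | grep -v Filesystem)
for target_dir in "$testdir" "$storage_fallback/tests/xnvme" "$storage_fallback"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE="$target_dir"
        break
    fi
done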
00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13379903488 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:40.164 10:18:19 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.165 --rc genhtml_branch_coverage=1 00:12:40.165 --rc genhtml_function_coverage=1 00:12:40.165 --rc genhtml_legend=1 00:12:40.165 --rc geninfo_all_blocks=1 00:12:40.165 --rc geninfo_unexecuted_blocks=1 00:12:40.165 00:12:40.165 ' 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.165 --rc genhtml_branch_coverage=1 00:12:40.165 --rc genhtml_function_coverage=1 00:12:40.165 --rc genhtml_legend=1 00:12:40.165 --rc geninfo_all_blocks=1 
00:12:40.165 --rc geninfo_unexecuted_blocks=1 00:12:40.165 00:12:40.165 ' 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.165 --rc genhtml_branch_coverage=1 00:12:40.165 --rc genhtml_function_coverage=1 00:12:40.165 --rc genhtml_legend=1 00:12:40.165 --rc geninfo_all_blocks=1 00:12:40.165 --rc geninfo_unexecuted_blocks=1 00:12:40.165 00:12:40.165 ' 00:12:40.165 10:18:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.165 --rc genhtml_branch_coverage=1 00:12:40.165 --rc genhtml_function_coverage=1 00:12:40.165 --rc genhtml_legend=1 00:12:40.165 --rc geninfo_all_blocks=1 00:12:40.165 --rc geninfo_unexecuted_blocks=1 00:12:40.165 00:12:40.165 ' 00:12:40.165 10:18:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.165 10:18:19 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.165 10:18:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.165 10:18:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.165 10:18:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.165 10:18:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:40.165 10:18:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.165 10:18:19 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:40.165 10:18:19 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:40.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.689 Waiting for block devices as requested 00:12:40.689 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.950 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.950 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:46.243 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:46.243 10:18:25 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:46.504 10:18:25 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:46.504 10:18:25 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:46.767 10:18:26 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:46.767 10:18:26 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:46.767 No valid GPT data, bailing 00:12:46.767 10:18:26 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:46.767 10:18:26 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:46.767 10:18:26 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:46.767 10:18:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:46.767 10:18:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.767 10:18:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.767 10:18:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.767 ************************************ 00:12:46.767 START TEST xnvme_rpc 00:12:46.767 ************************************ 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80445 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80445 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80445 ']' 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.767 10:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.767 [2024-11-29 10:18:26.210955] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
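The xnvme_rpc test body below is essentially four RPC calls wrapped in trace noise. Stripped down, the sequence looks like this sketch (the spdk_tgt launch and rpc.py paths are assumed from the repo layout; the RPC method names and jq filter are the ones visible in the trace):

build/bin/spdk_tgt &                   # target must be up and listening first
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# expected: libaio (name, filename and conserve_cpu are checked the same way)
scripts/rpc.py bdev_xnvme_delete xnvme_bdev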
00:12:46.767 [2024-11-29 10:18:26.211096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80445 ] 00:12:47.028 [2024-11-29 10:18:26.359811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.028 [2024-11-29 10:18:26.388556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.599 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.599 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:47.599 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:47.599 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.599 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.860 xnvme_bdev 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80445 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80445 ']' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80445 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80445 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.860 killing process with pid 80445 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80445' 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80445 00:12:47.860 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80445 00:12:48.120 00:12:48.120 real 0m1.428s 00:12:48.120 user 0m1.497s 00:12:48.120 sys 0m0.399s 00:12:48.120 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.120 10:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.120 ************************************ 00:12:48.120 END TEST xnvme_rpc 00:12:48.120 ************************************ 00:12:48.382 10:18:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:48.382 10:18:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:48.382 10:18:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.382 10:18:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.382 ************************************ 00:12:48.382 START TEST xnvme_bdevperf 00:12:48.382 ************************************ 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:48.382 10:18:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.382 { 00:12:48.382 "subsystems": [ 00:12:48.382 { 00:12:48.382 "subsystem": "bdev", 00:12:48.382 "config": [ 00:12:48.382 { 00:12:48.382 "params": { 00:12:48.382 "io_mechanism": "libaio", 00:12:48.382 "conserve_cpu": false, 00:12:48.382 "filename": "/dev/nvme0n1", 00:12:48.382 "name": "xnvme_bdev" 00:12:48.382 }, 00:12:48.382 "method": "bdev_xnvme_create" 00:12:48.382 }, 00:12:48.382 { 00:12:48.382 "method": "bdev_wait_for_examine" 00:12:48.382 } 00:12:48.382 ] 00:12:48.382 } 00:12:48.382 ] 00:12:48.382 } 00:12:48.382 [2024-11-29 10:18:27.683481] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:48.382 [2024-11-29 10:18:27.683597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80503 ] 00:12:48.382 [2024-11-29 10:18:27.821996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.382 [2024-11-29 10:18:27.840663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.644 Running I/O for 5 seconds... 00:12:50.573 27358.00 IOPS, 106.87 MiB/s [2024-11-29T10:18:30.981Z] 24388.00 IOPS, 95.27 MiB/s [2024-11-29T10:18:32.369Z] 24183.67 IOPS, 94.47 MiB/s [2024-11-29T10:18:32.943Z] 23860.00 IOPS, 93.20 MiB/s [2024-11-29T10:18:33.206Z] 23565.60 IOPS, 92.05 MiB/s 00:12:53.741 Latency(us) 00:12:53.741 [2024-11-29T10:18:33.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.741 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:53.741 xnvme_bdev : 5.01 23525.25 91.90 0.00 0.00 2714.82 475.77 8065.97 00:12:53.741 [2024-11-29T10:18:33.206Z] =================================================================================================================== 00:12:53.741 [2024-11-29T10:18:33.206Z] Total : 23525.25 91.90 0.00 0.00 2714.82 475.77 8065.97 00:12:53.741 10:18:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:53.741 10:18:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:53.741 10:18:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:53.741 10:18:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:53.741 10:18:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:53.741 { 00:12:53.741 "subsystems": [ 00:12:53.741 { 00:12:53.741 "subsystem": "bdev", 00:12:53.741 "config": [ 00:12:53.741 { 00:12:53.741 "params": { 00:12:53.741 "io_mechanism": "libaio", 00:12:53.741 "conserve_cpu": false, 00:12:53.741 "filename": "/dev/nvme0n1", 00:12:53.741 "name": "xnvme_bdev" 00:12:53.741 }, 00:12:53.741 "method": "bdev_xnvme_create" 00:12:53.741 }, 00:12:53.741 { 00:12:53.741 "method": "bdev_wait_for_examine" 00:12:53.741 } 00:12:53.741 ] 00:12:53.741 } 00:12:53.741 ] 00:12:53.741 } 00:12:53.741 [2024-11-29 10:18:33.201677] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
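Both bdevperf runs in this test hand their configuration to the tool as JSON on an anonymous descriptor (--json /dev/fd/62). The same invocation with a plain file, as a sketch; the JSON body is the subsystems blob printed in the trace, while the /tmp path is an assumption for illustration:

cat > /tmp/xnvme_bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create",
   "params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
build/examples/bdevperf --json /tmp/xnvme_bdev.json \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096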
00:12:53.741 [2024-11-29 10:18:33.201840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80567 ] 00:12:54.002 [2024-11-29 10:18:33.351327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.002 [2024-11-29 10:18:33.379715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.265 Running I/O for 5 seconds... 00:12:56.152 27806.00 IOPS, 108.62 MiB/s [2024-11-29T10:18:36.561Z] 20427.00 IOPS, 79.79 MiB/s [2024-11-29T10:18:37.946Z] 18506.00 IOPS, 72.29 MiB/s [2024-11-29T10:18:38.518Z] 17687.25 IOPS, 69.09 MiB/s 00:12:59.053 Latency(us) 00:12:59.053 [2024-11-29T10:18:38.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.053 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:59.053 xnvme_bdev : 5.00 20594.73 80.45 0.00 0.00 3102.87 48.44 21576.47 00:12:59.053 [2024-11-29T10:18:38.518Z] =================================================================================================================== 00:12:59.053 [2024-11-29T10:18:38.518Z] Total : 20594.73 80.45 0.00 0.00 3102.87 48.44 21576.47 00:12:59.314 ************************************ 00:12:59.314 END TEST xnvme_bdevperf 00:12:59.314 ************************************ 00:12:59.314 00:12:59.314 real 0m11.094s 00:12:59.314 user 0m4.665s 00:12:59.314 sys 0m5.164s 00:12:59.314 10:18:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.314 10:18:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:59.314 10:18:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:59.314 10:18:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:59.314 10:18:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.314 10:18:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.575 ************************************ 00:12:59.575 START TEST xnvme_fio_plugin 00:12:59.575 ************************************ 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:59.575 10:18:38 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:59.575 10:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.575 { 00:12:59.575 "subsystems": [ 00:12:59.575 { 00:12:59.575 "subsystem": "bdev", 00:12:59.575 "config": [ 00:12:59.575 { 00:12:59.575 "params": { 00:12:59.575 "io_mechanism": "libaio", 00:12:59.575 "conserve_cpu": false, 00:12:59.575 "filename": "/dev/nvme0n1", 00:12:59.575 "name": "xnvme_bdev" 00:12:59.575 }, 00:12:59.575 "method": "bdev_xnvme_create" 00:12:59.575 }, 00:12:59.575 { 00:12:59.575 "method": "bdev_wait_for_examine" 00:12:59.575 } 00:12:59.575 ] 00:12:59.575 } 00:12:59.575 ] 00:12:59.575 } 00:12:59.575 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:59.575 fio-3.35 00:12:59.575 Starting 1 thread 00:13:06.213 00:13:06.213 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80675: Fri Nov 29 10:18:44 2024 00:13:06.213 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5002msec) 00:13:06.213 slat (usec): min=4, max=2062, avg=20.95, stdev=92.05 00:13:06.213 clat (usec): min=108, max=4671, avg=1320.70, stdev=512.16 00:13:06.213 lat (usec): min=191, max=4813, avg=1341.65, stdev=503.73 00:13:06.213 clat percentiles (usec): 00:13:06.213 | 1.00th=[ 285], 5.00th=[ 537], 10.00th=[ 685], 20.00th=[ 889], 00:13:06.213 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1319], 60.00th=[ 1434], 00:13:06.213 | 70.00th=[ 1549], 80.00th=[ 1696], 90.00th=[ 1926], 95.00th=[ 2180], 00:13:06.213 | 99.00th=[ 2802], 99.50th=[ 3130], 99.90th=[ 3785], 99.95th=[ 3949], 00:13:06.213 | 99.99th=[ 4555] 00:13:06.213 bw ( KiB/s): min=129720, max=142168, per=100.00%, avg=136826.67, stdev=4004.34, 
samples=9 00:13:06.213 iops : min=32430, max=35542, avg=34206.67, stdev=1001.08, samples=9 00:13:06.213 lat (usec) : 250=0.64%, 500=3.62%, 750=8.58%, 1000=13.92% 00:13:06.213 lat (msec) : 2=65.04%, 4=8.15%, 10=0.04% 00:13:06.213 cpu : usr=43.39%, sys=48.19%, ctx=17, majf=0, minf=1065 00:13:06.213 IO depths : 1=0.5%, 2=1.3%, 4=3.2%, 8=8.7%, 16=23.4%, 32=60.9%, >=64=2.1% 00:13:06.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.213 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:13:06.213 issued rwts: total=169965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.213 00:13:06.213 Run status group 0 (all jobs): 00:13:06.213 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5002-5002msec 00:13:06.213 ----------------------------------------------------- 00:13:06.213 Suppressions used: 00:13:06.213 count bytes template 00:13:06.213 1 11 /usr/src/fio/parse.c 00:13:06.213 1 8 libtcmalloc_minimal.so 00:13:06.213 1 904 libcrypto.so 00:13:06.213 ----------------------------------------------------- 00:13:06.213 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:06.213 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:06.214 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:06.214 10:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.214 { 00:13:06.214 "subsystems": [ 00:13:06.214 { 00:13:06.214 "subsystem": "bdev", 00:13:06.214 "config": [ 00:13:06.214 { 00:13:06.214 "params": { 00:13:06.214 "io_mechanism": "libaio", 00:13:06.214 "conserve_cpu": false, 00:13:06.214 "filename": "/dev/nvme0n1", 00:13:06.214 "name": "xnvme_bdev" 00:13:06.214 }, 00:13:06.214 "method": "bdev_xnvme_create" 00:13:06.214 }, 00:13:06.214 { 00:13:06.214 "method": "bdev_wait_for_examine" 00:13:06.214 } 00:13:06.214 ] 00:13:06.214 } 00:13:06.214 ] 00:13:06.214 } 00:13:06.214 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:06.214 fio-3.35 00:13:06.214 Starting 1 thread 00:13:11.509 00:13:11.509 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80761: Fri Nov 29 10:18:50 2024 00:13:11.509 write: IOPS=38.2k, BW=149MiB/s (157MB/s)(747MiB/5001msec); 0 zone resets 00:13:11.509 slat (usec): min=4, max=1802, avg=21.01, stdev=72.82 00:13:11.509 clat (usec): min=107, max=5160, avg=1103.22, stdev=526.97 00:13:11.509 lat (usec): min=176, max=5360, avg=1124.24, stdev=523.48 00:13:11.509 clat percentiles (usec): 00:13:11.509 | 1.00th=[ 235], 5.00th=[ 363], 10.00th=[ 482], 20.00th=[ 660], 00:13:11.509 | 30.00th=[ 807], 40.00th=[ 930], 50.00th=[ 1057], 60.00th=[ 1172], 00:13:11.509 | 70.00th=[ 1303], 80.00th=[ 1467], 90.00th=[ 1762], 95.00th=[ 2040], 00:13:11.509 | 99.00th=[ 2802], 99.50th=[ 3064], 99.90th=[ 3654], 99.95th=[ 3949], 00:13:11.509 | 99.99th=[ 4555] 00:13:11.509 bw ( KiB/s): min=140496, max=158544, per=98.57%, avg=150796.33, stdev=5633.30, samples=9 00:13:11.509 iops : min=35124, max=39636, avg=37699.00, stdev=1408.30, samples=9 00:13:11.509 lat (usec) : 250=1.30%, 500=9.58%, 750=15.06%, 1000=19.77% 00:13:11.509 lat (msec) : 2=48.70%, 4=5.53%, 10=0.05% 00:13:11.509 cpu : usr=34.28%, sys=53.80%, ctx=12, majf=0, minf=1066 00:13:11.509 IO depths : 1=0.3%, 2=0.9%, 4=3.0%, 8=9.2%, 16=24.6%, 32=60.0%, >=64=2.0% 00:13:11.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.509 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:11.509 issued rwts: total=0,191273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.509 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.509 00:13:11.509 Run status group 0 (all jobs): 00:13:11.509 WRITE: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=747MiB (783MB), run=5001-5001msec 00:13:11.509 ----------------------------------------------------- 00:13:11.509 Suppressions used: 00:13:11.509 count bytes template 00:13:11.509 1 11 /usr/src/fio/parse.c 00:13:11.509 1 8 libtcmalloc_minimal.so 00:13:11.509 1 904 libcrypto.so 00:13:11.509 ----------------------------------------------------- 00:13:11.509 00:13:11.509 00:13:11.509 real 0m12.068s 00:13:11.509 user 0m5.019s 00:13:11.509 sys 0m5.649s 00:13:11.509 ************************************ 
00:13:11.509 END TEST xnvme_fio_plugin 00:13:11.509 ************************************ 00:13:11.509 10:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.509 10:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:11.509 10:18:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:11.509 10:18:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:11.509 10:18:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:11.509 10:18:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:11.509 10:18:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.509 10:18:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.509 10:18:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.509 ************************************ 00:13:11.509 START TEST xnvme_rpc 00:13:11.509 ************************************ 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80838 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80838 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80838 ']' 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.509 10:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.771 [2024-11-29 10:18:51.004188] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
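The xnvme_rpc test starting above brings up a standalone spdk_tgt, waits for its UNIX-domain RPC socket, creates the xnvme bdev with conserve_cpu enabled (the -c flag on bdev_xnvme_create), verifies each parameter by filtering framework_get_config output through jq, and finally deletes the bdev and kills the target. A condensed sketch of that flow, assuming SPDK's stock scripts/rpc.py client (the test's rpc_cmd helper wraps it) and the /var/tmp/spdk.sock socket shown in the log:

  ./build/bin/spdk_tgt &                                  # pid 80838 in this run
  # ... poll until /var/tmp/spdk.sock is accepting RPCs ...
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # -> true
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
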
00:13:11.771 [2024-11-29 10:18:51.004332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80838 ] 00:13:11.771 [2024-11-29 10:18:51.149463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.771 [2024-11-29 10:18:51.178659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 xnvme_bdev 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:12.716 10:18:51 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 10:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80838 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80838 ']' 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80838 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80838 00:13:12.716 killing process with pid 80838 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80838' 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80838 00:13:12.716 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80838 00:13:12.978 00:13:12.978 real 0m1.417s 00:13:12.978 user 0m1.509s 00:13:12.978 sys 0m0.386s 00:13:12.978 ************************************ 00:13:12.978 END TEST xnvme_rpc 00:13:12.978 ************************************ 00:13:12.978 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.978 10:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.978 10:18:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.978 10:18:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.978 10:18:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.978 10:18:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.978 ************************************ 00:13:12.978 START TEST xnvme_bdevperf 00:13:12.978 ************************************ 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:12.978 10:18:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:12.978 { 00:13:12.978 "subsystems": [ 00:13:12.978 { 00:13:12.978 "subsystem": "bdev", 00:13:12.978 "config": [ 00:13:12.978 { 00:13:12.978 "params": { 00:13:12.978 "io_mechanism": "libaio", 00:13:12.978 "conserve_cpu": true, 00:13:12.978 "filename": "/dev/nvme0n1", 00:13:12.978 "name": "xnvme_bdev" 00:13:12.978 }, 00:13:12.978 "method": "bdev_xnvme_create" 00:13:12.978 }, 00:13:12.978 { 00:13:12.978 "method": "bdev_wait_for_examine" 00:13:12.978 } 00:13:12.978 ] 00:13:12.978 } 00:13:12.978 ] 00:13:12.978 } 00:13:13.239 [2024-11-29 10:18:52.470502] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:13.239 [2024-11-29 10:18:52.470850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80896 ] 00:13:13.239 [2024-11-29 10:18:52.618924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.239 [2024-11-29 10:18:52.647949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.500 Running I/O for 5 seconds... 00:13:15.388 31641.00 IOPS, 123.60 MiB/s [2024-11-29T10:18:55.795Z] 32810.50 IOPS, 128.17 MiB/s [2024-11-29T10:18:57.184Z] 31637.33 IOPS, 123.58 MiB/s [2024-11-29T10:18:58.130Z] 31000.00 IOPS, 121.09 MiB/s 00:13:18.665 Latency(us) 00:13:18.665 [2024-11-29T10:18:58.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.665 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.665 xnvme_bdev : 5.00 30937.50 120.85 0.00 0.00 2064.07 186.68 8570.09 00:13:18.665 [2024-11-29T10:18:58.130Z] =================================================================================================================== 00:13:18.665 [2024-11-29T10:18:58.130Z] Total : 30937.50 120.85 0.00 0.00 2064.07 186.68 8570.09 00:13:18.665 10:18:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:18.665 10:18:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:18.665 10:18:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:18.665 10:18:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:18.665 10:18:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.665 { 00:13:18.665 "subsystems": [ 00:13:18.665 { 00:13:18.665 "subsystem": "bdev", 00:13:18.665 "config": [ 00:13:18.665 { 00:13:18.665 "params": { 00:13:18.665 "io_mechanism": "libaio", 00:13:18.665 "conserve_cpu": true, 00:13:18.665 "filename": "/dev/nvme0n1", 00:13:18.665 "name": "xnvme_bdev" 00:13:18.665 }, 00:13:18.665 "method": "bdev_xnvme_create" 00:13:18.665 }, 00:13:18.665 { 00:13:18.665 "method": "bdev_wait_for_examine" 00:13:18.665 } 00:13:18.665 ] 00:13:18.665 } 00:13:18.665 ] 00:13:18.665 } 00:13:18.665 [2024-11-29 10:18:58.035336] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
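Note that between the randread run above and the randwrite run now starting, the generated JSON is identical; the only command-line difference is the workload selector, so the read and write numbers are measured against the same bdev configuration:

  # identical config, only -w changes (taken verbatim from the invocations in this log)
  ./build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread  -t 5 -T xnvme_bdev -o 4096
  ./build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
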
00:13:18.665 [2024-11-29 10:18:58.035457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80960 ] 00:13:18.927 [2024-11-29 10:18:58.178368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.927 [2024-11-29 10:18:58.207999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.927 Running I/O for 5 seconds... 00:13:21.259 33342.00 IOPS, 130.24 MiB/s [2024-11-29T10:19:01.665Z] 28960.50 IOPS, 113.13 MiB/s [2024-11-29T10:19:02.608Z] 29946.00 IOPS, 116.98 MiB/s [2024-11-29T10:19:03.551Z] 30924.75 IOPS, 120.80 MiB/s 00:13:24.087 Latency(us) 00:13:24.087 [2024-11-29T10:19:03.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.087 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:24.087 xnvme_bdev : 5.00 31574.64 123.34 0.00 0.00 2022.36 46.08 24601.21 00:13:24.087 [2024-11-29T10:19:03.552Z] =================================================================================================================== 00:13:24.087 [2024-11-29T10:19:03.552Z] Total : 31574.64 123.34 0.00 0.00 2022.36 46.08 24601.21 00:13:24.087 00:13:24.087 real 0m11.121s 00:13:24.087 user 0m3.390s 00:13:24.087 sys 0m6.237s 00:13:24.087 10:19:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.087 ************************************ 00:13:24.087 END TEST xnvme_bdevperf 00:13:24.087 ************************************ 00:13:24.087 10:19:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 10:19:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:24.385 10:19:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.385 10:19:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.385 10:19:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 ************************************ 00:13:24.385 START TEST xnvme_fio_plugin 00:13:24.385 ************************************ 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:24.385 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:24.385 10:19:03 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:24.386 10:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.386 { 00:13:24.386 "subsystems": [ 00:13:24.386 { 00:13:24.386 "subsystem": "bdev", 00:13:24.386 "config": [ 00:13:24.386 { 00:13:24.386 "params": { 00:13:24.386 "io_mechanism": "libaio", 00:13:24.386 "conserve_cpu": true, 00:13:24.386 "filename": "/dev/nvme0n1", 00:13:24.386 "name": "xnvme_bdev" 00:13:24.386 }, 00:13:24.386 "method": "bdev_xnvme_create" 00:13:24.386 }, 00:13:24.386 { 00:13:24.386 "method": "bdev_wait_for_examine" 00:13:24.386 } 00:13:24.386 ] 00:13:24.386 } 00:13:24.386 ] 00:13:24.386 } 00:13:24.386 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:24.386 fio-3.35 00:13:24.386 Starting 1 thread 00:13:30.997 00:13:30.997 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81068: Fri Nov 29 10:19:09 2024 00:13:30.997 read: IOPS=32.0k, BW=125MiB/s (131MB/s)(625MiB/5001msec) 00:13:30.997 slat (usec): min=4, max=1816, avg=24.90, stdev=97.06 00:13:30.997 clat (usec): min=108, max=4468, avg=1335.04, stdev=560.28 00:13:30.997 lat (usec): min=183, max=4544, avg=1359.94, stdev=551.71 00:13:30.997 clat percentiles (usec): 00:13:30.997 | 1.00th=[ 273], 5.00th=[ 486], 10.00th=[ 644], 20.00th=[ 848], 00:13:30.997 | 30.00th=[ 1029], 40.00th=[ 1172], 50.00th=[ 1303], 60.00th=[ 1434], 00:13:30.997 | 70.00th=[ 1582], 80.00th=[ 1745], 90.00th=[ 2024], 95.00th=[ 2311], 00:13:30.997 | 99.00th=[ 2999], 99.50th=[ 3228], 99.90th=[ 3752], 99.95th=[ 3949], 00:13:30.997 | 99.99th=[ 4293] 00:13:30.997 bw ( KiB/s): min=120560, max=135128, per=100.00%, avg=128770.67, stdev=4720.07, 
samples=9 00:13:30.997 iops : min=30140, max=33782, avg=32192.67, stdev=1180.02, samples=9 00:13:30.997 lat (usec) : 250=0.75%, 500=4.56%, 750=9.59%, 1000=13.34% 00:13:30.997 lat (msec) : 2=61.02%, 4=10.70%, 10=0.04% 00:13:30.997 cpu : usr=33.56%, sys=57.12%, ctx=24, majf=0, minf=1065 00:13:30.997 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.4%, 16=23.9%, 32=61.6%, >=64=2.1% 00:13:30.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.997 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:30.997 issued rwts: total=160017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:30.997 00:13:30.997 Run status group 0 (all jobs): 00:13:30.997 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=625MiB (655MB), run=5001-5001msec 00:13:30.997 ----------------------------------------------------- 00:13:30.997 Suppressions used: 00:13:30.997 count bytes template 00:13:30.997 1 11 /usr/src/fio/parse.c 00:13:30.997 1 8 libtcmalloc_minimal.so 00:13:30.997 1 904 libcrypto.so 00:13:30.997 ----------------------------------------------------- 00:13:30.997 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:30.997 { 00:13:30.997 "subsystems": [ 00:13:30.997 { 00:13:30.997 "subsystem": "bdev", 00:13:30.997 "config": [ 00:13:30.997 { 00:13:30.997 "params": { 00:13:30.997 "io_mechanism": "libaio", 
00:13:30.997 "conserve_cpu": true, 00:13:30.997 "filename": "/dev/nvme0n1", 00:13:30.997 "name": "xnvme_bdev" 00:13:30.997 }, 00:13:30.997 "method": "bdev_xnvme_create" 00:13:30.997 }, 00:13:30.997 { 00:13:30.997 "method": "bdev_wait_for_examine" 00:13:30.997 } 00:13:30.997 ] 00:13:30.997 } 00:13:30.997 ] 00:13:30.997 } 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:30.997 10:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.997 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:30.997 fio-3.35 00:13:30.997 Starting 1 thread 00:13:36.292 00:13:36.292 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81153: Fri Nov 29 10:19:15 2024 00:13:36.292 write: IOPS=33.9k, BW=132MiB/s (139MB/s)(663MiB/5001msec); 0 zone resets 00:13:36.292 slat (usec): min=4, max=1637, avg=23.70, stdev=78.71 00:13:36.292 clat (usec): min=107, max=5198, avg=1226.74, stdev=570.66 00:13:36.292 lat (usec): min=179, max=5242, avg=1250.43, stdev=566.42 00:13:36.292 clat percentiles (usec): 00:13:36.292 | 1.00th=[ 251], 5.00th=[ 404], 10.00th=[ 537], 20.00th=[ 750], 00:13:36.292 | 30.00th=[ 914], 40.00th=[ 1057], 50.00th=[ 1188], 60.00th=[ 1319], 00:13:36.292 | 70.00th=[ 1450], 80.00th=[ 1631], 90.00th=[ 1893], 95.00th=[ 2212], 00:13:36.292 | 99.00th=[ 3097], 99.50th=[ 3425], 99.90th=[ 3982], 99.95th=[ 4228], 00:13:36.292 | 99.99th=[ 4817] 00:13:36.292 bw ( KiB/s): min=126024, max=156256, per=99.46%, avg=134951.22, stdev=9322.54, samples=9 00:13:36.292 iops : min=31506, max=39064, avg=33737.78, stdev=2330.65, samples=9 00:13:36.292 lat (usec) : 250=0.98%, 500=7.47%, 750=11.66%, 1000=16.09% 00:13:36.292 lat (msec) : 2=55.91%, 4=7.79%, 10=0.10% 00:13:36.292 cpu : usr=32.56%, sys=56.00%, ctx=36, majf=0, minf=1066 00:13:36.292 IO depths : 1=0.3%, 2=1.0%, 4=3.1%, 8=9.5%, 16=24.7%, 32=59.4%, >=64=2.0% 00:13:36.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.292 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:36.292 issued rwts: total=0,169632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.292 00:13:36.292 Run status group 0 (all jobs): 00:13:36.292 WRITE: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=663MiB (695MB), run=5001-5001msec 00:13:36.292 ----------------------------------------------------- 00:13:36.292 Suppressions used: 00:13:36.292 count bytes template 00:13:36.292 1 11 /usr/src/fio/parse.c 00:13:36.292 1 8 libtcmalloc_minimal.so 00:13:36.292 1 904 libcrypto.so 00:13:36.292 ----------------------------------------------------- 00:13:36.292 00:13:36.292 00:13:36.292 real 0m12.054s 00:13:36.292 user 0m4.424s 00:13:36.292 sys 0m6.210s 00:13:36.292 10:19:15 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.292 ************************************ 00:13:36.292 END TEST xnvme_fio_plugin 00:13:36.292 ************************************ 00:13:36.292 10:19:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:36.292 10:19:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:36.292 10:19:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:36.292 10:19:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.292 10:19:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.292 ************************************ 00:13:36.292 START TEST xnvme_rpc 00:13:36.292 ************************************ 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81240 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81240 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81240 ']' 00:13:36.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.292 10:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:36.553 [2024-11-29 10:19:15.818906] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
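Here the outer loop in xnvme.sh swaps the io_mechanism from libaio to io_uring (with conserve_cpu reset to false) and repeats the whole rpc/bdevperf/fio_plugin sequence. The create call below passes an empty string where the libaio runs passed -c, so conserve_cpu comes back false; a sketch of the verification step, again assuming the rpc.py client:

  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # -> io_uring
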
00:13:36.553 [2024-11-29 10:19:15.819059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81240 ] 00:13:36.553 [2024-11-29 10:19:15.968218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.553 [2024-11-29 10:19:15.999209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.497 xnvme_bdev 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:37.497 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81240 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81240 ']' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81240 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81240 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.498 killing process with pid 81240 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81240' 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81240 00:13:37.498 10:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81240 00:13:37.759 00:13:37.759 real 0m1.393s 00:13:37.759 user 0m1.469s 00:13:37.759 sys 0m0.400s 00:13:37.759 10:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.759 10:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.759 ************************************ 00:13:37.759 END TEST xnvme_rpc 00:13:37.760 ************************************ 00:13:37.760 10:19:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:37.760 10:19:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.760 10:19:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.760 10:19:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.760 ************************************ 00:13:37.760 START TEST xnvme_bdevperf 00:13:37.760 ************************************ 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.760 10:19:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.020 { 00:13:38.020 "subsystems": [ 00:13:38.020 { 00:13:38.020 "subsystem": "bdev", 00:13:38.020 "config": [ 00:13:38.020 { 00:13:38.020 "params": { 00:13:38.020 "io_mechanism": "io_uring", 00:13:38.020 "conserve_cpu": false, 00:13:38.020 "filename": "/dev/nvme0n1", 00:13:38.020 "name": "xnvme_bdev" 00:13:38.020 }, 00:13:38.020 "method": "bdev_xnvme_create" 00:13:38.020 }, 00:13:38.020 { 00:13:38.020 "method": "bdev_wait_for_examine" 00:13:38.020 } 00:13:38.020 ] 00:13:38.020 } 00:13:38.020 ] 00:13:38.020 } 00:13:38.020 [2024-11-29 10:19:17.263008] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:38.020 [2024-11-29 10:19:17.263153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81292 ] 00:13:38.020 [2024-11-29 10:19:17.412426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.020 [2024-11-29 10:19:17.441178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.281 Running I/O for 5 seconds... 00:13:40.162 36288.00 IOPS, 141.75 MiB/s [2024-11-29T10:19:20.571Z] 36736.00 IOPS, 143.50 MiB/s [2024-11-29T10:19:21.961Z] 36394.67 IOPS, 142.17 MiB/s [2024-11-29T10:19:22.907Z] 36336.00 IOPS, 141.94 MiB/s [2024-11-29T10:19:22.907Z] 36127.60 IOPS, 141.12 MiB/s 00:13:43.442 Latency(us) 00:13:43.442 [2024-11-29T10:19:22.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.442 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:43.442 xnvme_bdev : 5.01 36079.65 140.94 0.00 0.00 1769.69 1178.39 5217.67 00:13:43.442 [2024-11-29T10:19:22.907Z] =================================================================================================================== 00:13:43.442 [2024-11-29T10:19:22.907Z] Total : 36079.65 140.94 0.00 0.00 1769.69 1178.39 5217.67 00:13:43.442 10:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.442 10:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:43.442 10:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:43.442 10:19:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:43.442 10:19:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.442 { 00:13:43.442 "subsystems": [ 00:13:43.442 { 00:13:43.442 "subsystem": "bdev", 00:13:43.442 "config": [ 00:13:43.442 { 00:13:43.442 "params": { 00:13:43.442 "io_mechanism": "io_uring", 00:13:43.442 "conserve_cpu": false, 00:13:43.442 "filename": "/dev/nvme0n1", 00:13:43.442 "name": "xnvme_bdev" 00:13:43.442 }, 00:13:43.442 "method": "bdev_xnvme_create" 00:13:43.442 }, 00:13:43.442 { 00:13:43.442 "method": "bdev_wait_for_examine" 00:13:43.442 } 00:13:43.442 ] 00:13:43.442 } 00:13:43.442 ] 00:13:43.442 } 00:13:43.442 [2024-11-29 10:19:22.807216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:13:43.442 [2024-11-29 10:19:22.807353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81356 ] 00:13:43.704 [2024-11-29 10:19:22.954664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.704 [2024-11-29 10:19:22.983478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.704 Running I/O for 5 seconds... 00:13:46.029 35010.00 IOPS, 136.76 MiB/s [2024-11-29T10:19:26.439Z] 35067.00 IOPS, 136.98 MiB/s [2024-11-29T10:19:27.382Z] 35024.33 IOPS, 136.81 MiB/s [2024-11-29T10:19:28.328Z] 35226.25 IOPS, 137.60 MiB/s 00:13:48.863 Latency(us) 00:13:48.863 [2024-11-29T10:19:28.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.863 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:48.863 xnvme_bdev : 5.00 35002.84 136.73 0.00 0.00 1823.91 109.49 28029.24 00:13:48.863 [2024-11-29T10:19:28.328Z] =================================================================================================================== 00:13:48.863 [2024-11-29T10:19:28.328Z] Total : 35002.84 136.73 0.00 0.00 1823.91 109.49 28029.24 00:13:48.863 00:13:48.863 real 0m11.075s 00:13:48.863 user 0m3.997s 00:13:48.863 sys 0m6.798s 00:13:48.863 10:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.863 10:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:48.863 ************************************ 00:13:48.863 END TEST xnvme_bdevperf 00:13:48.863 ************************************ 00:13:48.863 10:19:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:48.863 10:19:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.863 10:19:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.863 10:19:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.125 ************************************ 00:13:49.125 START TEST xnvme_fio_plugin 00:13:49.125 ************************************ 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:49.125 10:19:28 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:49.125 10:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.125 { 00:13:49.125 "subsystems": [ 00:13:49.125 { 00:13:49.125 "subsystem": "bdev", 00:13:49.125 "config": [ 00:13:49.125 { 00:13:49.125 "params": { 00:13:49.125 "io_mechanism": "io_uring", 00:13:49.125 "conserve_cpu": false, 00:13:49.125 "filename": "/dev/nvme0n1", 00:13:49.125 "name": "xnvme_bdev" 00:13:49.125 }, 00:13:49.125 "method": "bdev_xnvme_create" 00:13:49.125 }, 00:13:49.125 { 00:13:49.125 "method": "bdev_wait_for_examine" 00:13:49.125 } 00:13:49.125 ] 00:13:49.125 } 00:13:49.125 ] 00:13:49.125 } 00:13:49.125 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:49.125 fio-3.35 00:13:49.125 Starting 1 thread 00:13:55.716 00:13:55.716 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81464: Fri Nov 29 10:19:33 2024 00:13:55.716 read: IOPS=34.3k, BW=134MiB/s (140MB/s)(669MiB/5001msec) 00:13:55.716 slat (nsec): min=2906, max=58886, avg=4091.00, stdev=2255.03 00:13:55.716 clat (usec): min=964, max=3545, avg=1703.53, stdev=294.94 00:13:55.716 lat (usec): min=973, max=3575, avg=1707.62, stdev=295.44 00:13:55.716 clat percentiles (usec): 00:13:55.716 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1450], 00:13:55.716 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:13:55.716 | 70.00th=[ 1827], 80.00th=[ 1926], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:55.716 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3064], 99.95th=[ 3195], 00:13:55.716 | 99.99th=[ 3458] 00:13:55.716 bw ( KiB/s): min=130560, max=144896, per=99.92%, avg=136903.11, 
stdev=4213.86, samples=9 00:13:55.716 iops : min=32640, max=36224, avg=34225.78, stdev=1053.46, samples=9 00:13:55.716 lat (usec) : 1000=0.01% 00:13:55.716 lat (msec) : 2=84.78%, 4=15.21% 00:13:55.716 cpu : usr=32.22%, sys=66.38%, ctx=17, majf=0, minf=1063 00:13:55.716 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:55.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.716 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:55.716 issued rwts: total=171296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.716 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:55.716 00:13:55.716 Run status group 0 (all jobs): 00:13:55.716 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=669MiB (702MB), run=5001-5001msec 00:13:55.716 ----------------------------------------------------- 00:13:55.716 Suppressions used: 00:13:55.716 count bytes template 00:13:55.716 1 11 /usr/src/fio/parse.c 00:13:55.716 1 8 libtcmalloc_minimal.so 00:13:55.716 1 904 libcrypto.so 00:13:55.716 ----------------------------------------------------- 00:13:55.716 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:55.716 10:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:55.716 { 00:13:55.716 "subsystems": [ 00:13:55.716 { 00:13:55.716 "subsystem": "bdev", 00:13:55.716 "config": [ 00:13:55.716 { 00:13:55.716 "params": { 00:13:55.716 "io_mechanism": "io_uring", 00:13:55.716 "conserve_cpu": false, 00:13:55.716 "filename": "/dev/nvme0n1", 00:13:55.716 "name": "xnvme_bdev" 00:13:55.716 }, 00:13:55.716 "method": "bdev_xnvme_create" 00:13:55.716 }, 00:13:55.716 { 00:13:55.716 "method": "bdev_wait_for_examine" 00:13:55.716 } 00:13:55.716 ] 00:13:55.716 } 00:13:55.716 ] 00:13:55.716 } 00:13:55.716 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:55.716 fio-3.35 00:13:55.716 Starting 1 thread 00:14:01.006 00:14:01.006 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81545: Fri Nov 29 10:19:39 2024 00:14:01.006 write: IOPS=35.6k, BW=139MiB/s (146MB/s)(694MiB/5001msec); 0 zone resets 00:14:01.006 slat (nsec): min=2930, max=59747, avg=4129.09, stdev=2242.43 00:14:01.006 clat (usec): min=185, max=4402, avg=1633.40, stdev=275.63 00:14:01.006 lat (usec): min=189, max=4406, avg=1637.53, stdev=276.08 00:14:01.006 clat percentiles (usec): 00:14:01.006 | 1.00th=[ 1156], 5.00th=[ 1254], 10.00th=[ 1319], 20.00th=[ 1401], 00:14:01.006 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:01.006 | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 1975], 95.00th=[ 2114], 00:14:01.006 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 3195], 99.95th=[ 3720], 00:14:01.006 | 99.99th=[ 4178] 00:14:01.006 bw ( KiB/s): min=137248, max=149488, per=100.00%, avg=142783.11, stdev=3893.64, samples=9 00:14:01.006 iops : min=34312, max=37372, avg=35695.78, stdev=973.41, samples=9 00:14:01.006 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.02% 00:14:01.006 lat (msec) : 2=90.81%, 4=9.09%, 10=0.03% 00:14:01.006 cpu : usr=33.46%, sys=65.10%, ctx=17, majf=0, minf=1064 00:14:01.006 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.4%, >=64=1.6% 00:14:01.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.006 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:01.006 issued rwts: total=0,177787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.006 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.006 00:14:01.006 Run status group 0 (all jobs): 00:14:01.006 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=694MiB (728MB), run=5001-5001msec 00:14:01.006 ----------------------------------------------------- 00:14:01.006 Suppressions used: 00:14:01.006 count bytes template 00:14:01.006 1 11 /usr/src/fio/parse.c 00:14:01.006 1 8 libtcmalloc_minimal.so 00:14:01.006 1 904 libcrypto.so 00:14:01.006 ----------------------------------------------------- 00:14:01.006 00:14:01.006 00:14:01.006 real 0m11.987s 00:14:01.006 user 0m4.427s 00:14:01.006 sys 0m7.105s 00:14:01.006 10:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:14:01.006 ************************************ 00:14:01.006 END TEST xnvme_fio_plugin 00:14:01.006 ************************************ 00:14:01.006 10:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:01.006 10:19:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:01.006 10:19:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:01.006 10:19:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:01.006 10:19:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:01.006 10:19:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.006 10:19:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.006 10:19:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.006 ************************************ 00:14:01.007 START TEST xnvme_rpc 00:14:01.007 ************************************ 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81627 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81627 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81627 ']' 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.007 10:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:01.268 [2024-11-29 10:19:40.476842] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
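[Note] The xnvme_rpc test starting here exercises the RPC surface directly: create the bdev with bdev_xnvme_create (the trailing -c maps to conserve_cpu=true), read each parameter back out of framework_get_config bdev through a jq filter, then delete the bdev. A condensed sketch of that round-trip, assuming rpc.py from the repo stands in for the harness's rpc_cmd wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c == conserve_cpu

  # rpc_xnvme <param>: pull one field back out of the saved bdev config,
  # mirroring xnvme/common.sh@65-66 in the trace.
  rpc_xnvme() {
    "$rpc" framework_get_config bdev |
      jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
  }

  [[ $(rpc_xnvme name) == xnvme_bdev ]]
  [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
  [[ $(rpc_xnvme io_mechanism) == io_uring ]]
  [[ $(rpc_xnvme conserve_cpu) == true ]]

  "$rpc" bdev_xnvme_delete xnvme_bdev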
00:14:01.268 [2024-11-29 10:19:40.476991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81627 ] 00:14:01.268 [2024-11-29 10:19:40.624408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.268 [2024-11-29 10:19:40.654781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.211 xnvme_bdev 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:02.211 10:19:41 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:02.211 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81627 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81627 ']' 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81627 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81627 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.212 killing process with pid 81627 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81627' 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81627 00:14:02.212 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81627 00:14:02.473 00:14:02.473 real 0m1.477s 00:14:02.473 user 0m1.610s 00:14:02.473 sys 0m0.421s 00:14:02.473 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.473 ************************************ 00:14:02.473 END TEST xnvme_rpc 00:14:02.473 ************************************ 00:14:02.473 10:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.473 10:19:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:02.473 10:19:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:02.473 10:19:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.473 10:19:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:02.473 ************************************ 00:14:02.473 START TEST xnvme_bdevperf 00:14:02.473 ************************************ 00:14:02.473 10:19:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:02.473 10:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:02.473 10:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:02.473 10:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.734 10:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:02.734 10:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:02.734 10:19:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:02.734 10:19:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.734 { 00:14:02.734 "subsystems": [ 00:14:02.734 { 00:14:02.734 "subsystem": "bdev", 00:14:02.734 "config": [ 00:14:02.734 { 00:14:02.734 "params": { 00:14:02.734 "io_mechanism": "io_uring", 00:14:02.734 "conserve_cpu": true, 00:14:02.734 "filename": "/dev/nvme0n1", 00:14:02.734 "name": "xnvme_bdev" 00:14:02.734 }, 00:14:02.734 "method": "bdev_xnvme_create" 00:14:02.734 }, 00:14:02.734 { 00:14:02.734 "method": "bdev_wait_for_examine" 00:14:02.734 } 00:14:02.734 ] 00:14:02.734 } 00:14:02.734 ] 00:14:02.734 } 00:14:02.734 [2024-11-29 10:19:41.997755] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:02.734 [2024-11-29 10:19:41.997909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81684 ] 00:14:02.734 [2024-11-29 10:19:42.144960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.734 [2024-11-29 10:19:42.173922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.994 Running I/O for 5 seconds... 00:14:04.925 35520.00 IOPS, 138.75 MiB/s [2024-11-29T10:19:45.334Z] 35232.00 IOPS, 137.62 MiB/s [2024-11-29T10:19:46.719Z] 35434.67 IOPS, 138.42 MiB/s [2024-11-29T10:19:47.292Z] 35376.00 IOPS, 138.19 MiB/s [2024-11-29T10:19:47.292Z] 35417.60 IOPS, 138.35 MiB/s 00:14:07.827 Latency(us) 00:14:07.827 [2024-11-29T10:19:47.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.827 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:07.827 xnvme_bdev : 5.01 35381.89 138.21 0.00 0.00 1804.49 938.93 4663.14 00:14:07.827 [2024-11-29T10:19:47.292Z] =================================================================================================================== 00:14:07.827 [2024-11-29T10:19:47.292Z] Total : 35381.89 138.21 0.00 0.00 1804.49 938.93 4663.14 00:14:08.089 10:19:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:08.089 10:19:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:08.089 10:19:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:08.089 10:19:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:08.089 10:19:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:08.089 { 00:14:08.089 "subsystems": [ 00:14:08.089 { 00:14:08.089 "subsystem": "bdev", 00:14:08.089 "config": [ 00:14:08.089 { 00:14:08.089 "params": { 00:14:08.089 "io_mechanism": "io_uring", 00:14:08.089 "conserve_cpu": true, 00:14:08.089 "filename": "/dev/nvme0n1", 00:14:08.089 "name": "xnvme_bdev" 00:14:08.089 }, 00:14:08.089 "method": "bdev_xnvme_create" 00:14:08.089 }, 00:14:08.089 { 00:14:08.089 "method": "bdev_wait_for_examine" 00:14:08.089 } 00:14:08.089 ] 00:14:08.089 } 00:14:08.089 ] 00:14:08.089 } 00:14:08.089 [2024-11-29 10:19:47.528925] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:08.089 [2024-11-29 10:19:47.529087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81753 ] 00:14:08.350 [2024-11-29 10:19:47.678239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.351 [2024-11-29 10:19:47.706613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.351 Running I/O for 5 seconds... 00:14:10.682 35052.00 IOPS, 136.92 MiB/s [2024-11-29T10:19:51.092Z] 34645.00 IOPS, 135.33 MiB/s [2024-11-29T10:19:52.039Z] 34305.33 IOPS, 134.01 MiB/s [2024-11-29T10:19:52.984Z] 34360.50 IOPS, 134.22 MiB/s 00:14:13.519 Latency(us) 00:14:13.519 [2024-11-29T10:19:52.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.519 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:13.519 xnvme_bdev : 5.00 34693.68 135.52 0.00 0.00 1840.16 315.08 18551.73 00:14:13.519 [2024-11-29T10:19:52.984Z] =================================================================================================================== 00:14:13.519 [2024-11-29T10:19:52.984Z] Total : 34693.68 135.52 0.00 0.00 1840.16 315.08 18551.73 00:14:13.781 00:14:13.781 real 0m11.095s 00:14:13.781 user 0m5.367s 00:14:13.781 sys 0m5.151s 00:14:13.781 10:19:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.781 ************************************ 00:14:13.781 END TEST xnvme_bdevperf 00:14:13.781 ************************************ 00:14:13.781 10:19:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.781 10:19:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:13.781 10:19:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.781 10:19:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.781 10:19:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.781 ************************************ 00:14:13.781 START TEST xnvme_fio_plugin 00:14:13.781 ************************************ 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:13.781 10:19:53 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:13.781 10:19:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.781 { 00:14:13.781 "subsystems": [ 00:14:13.781 { 00:14:13.781 "subsystem": "bdev", 00:14:13.781 "config": [ 00:14:13.781 { 00:14:13.781 "params": { 00:14:13.781 "io_mechanism": "io_uring", 00:14:13.781 "conserve_cpu": true, 00:14:13.781 "filename": "/dev/nvme0n1", 00:14:13.781 "name": "xnvme_bdev" 00:14:13.781 }, 00:14:13.781 "method": "bdev_xnvme_create" 00:14:13.781 }, 00:14:13.781 { 00:14:13.781 "method": "bdev_wait_for_examine" 00:14:13.781 } 00:14:13.781 ] 00:14:13.781 } 00:14:13.781 ] 00:14:13.781 } 00:14:14.044 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:14.044 fio-3.35 00:14:14.044 Starting 1 thread 00:14:19.338 00:14:19.338 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81857: Fri Nov 29 10:19:58 2024 00:14:19.338 read: IOPS=34.5k, BW=135MiB/s (141MB/s)(675MiB/5001msec) 00:14:19.338 slat (usec): min=2, max=156, avg= 3.83, stdev= 2.05 00:14:19.338 clat (usec): min=938, max=3453, avg=1698.24, stdev=283.43 00:14:19.338 lat (usec): min=941, max=3490, avg=1702.07, stdev=283.79 00:14:19.338 clat percentiles (usec): 00:14:19.338 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[ 1467], 00:14:19.338 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:14:19.338 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:14:19.338 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 2868], 99.95th=[ 3163], 00:14:19.338 | 99.99th=[ 3392] 00:14:19.338 bw ( KiB/s): min=133120, max=145117, per=100.00%, avg=138719.67, 
stdev=4013.20, samples=9 00:14:19.338 iops : min=33280, max=36279, avg=34679.89, stdev=1003.25, samples=9 00:14:19.338 lat (usec) : 1000=0.02% 00:14:19.338 lat (msec) : 2=86.20%, 4=13.79% 00:14:19.338 cpu : usr=49.60%, sys=46.40%, ctx=12, majf=0, minf=1063 00:14:19.338 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:19.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.338 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:19.338 issued rwts: total=172736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:19.338 00:14:19.338 Run status group 0 (all jobs): 00:14:19.338 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=675MiB (708MB), run=5001-5001msec 00:14:19.912 ----------------------------------------------------- 00:14:19.912 Suppressions used: 00:14:19.912 count bytes template 00:14:19.912 1 11 /usr/src/fio/parse.c 00:14:19.912 1 8 libtcmalloc_minimal.so 00:14:19.912 1 904 libcrypto.so 00:14:19.912 ----------------------------------------------------- 00:14:19.912 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:19.912 10:19:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.912 { 00:14:19.912 "subsystems": [ 00:14:19.912 { 00:14:19.912 "subsystem": "bdev", 00:14:19.912 "config": [ 00:14:19.912 { 00:14:19.912 "params": { 00:14:19.912 "io_mechanism": "io_uring", 00:14:19.912 "conserve_cpu": true, 00:14:19.912 "filename": "/dev/nvme0n1", 00:14:19.912 "name": "xnvme_bdev" 00:14:19.912 }, 00:14:19.912 "method": "bdev_xnvme_create" 00:14:19.912 }, 00:14:19.912 { 00:14:19.912 "method": "bdev_wait_for_examine" 00:14:19.912 } 00:14:19.912 ] 00:14:19.912 } 00:14:19.912 ] 00:14:19.912 } 00:14:20.174 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:20.174 fio-3.35 00:14:20.174 Starting 1 thread 00:14:25.471 00:14:25.471 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81937: Fri Nov 29 10:20:04 2024 00:14:25.471 write: IOPS=35.4k, BW=138MiB/s (145MB/s)(692MiB/5002msec); 0 zone resets 00:14:25.471 slat (nsec): min=2935, max=75828, avg=4191.97, stdev=2358.70 00:14:25.471 clat (usec): min=901, max=5246, avg=1637.18, stdev=257.46 00:14:25.471 lat (usec): min=905, max=5249, avg=1641.37, stdev=257.99 00:14:25.471 clat percentiles (usec): 00:14:25.471 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1352], 20.00th=[ 1434], 00:14:25.471 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:25.471 | 70.00th=[ 1729], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2114], 00:14:25.471 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 3261], 99.95th=[ 3458], 00:14:25.471 | 99.99th=[ 4047] 00:14:25.471 bw ( KiB/s): min=137048, max=144640, per=100.00%, avg=141976.00, stdev=2438.31, samples=9 00:14:25.471 iops : min=34262, max=36160, avg=35494.00, stdev=609.58, samples=9 00:14:25.471 lat (usec) : 1000=0.02% 00:14:25.471 lat (msec) : 2=91.69%, 4=8.28%, 10=0.01% 00:14:25.471 cpu : usr=44.01%, sys=51.27%, ctx=16, majf=0, minf=1064 00:14:25.471 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:25.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.471 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:25.471 issued rwts: total=0,177097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:25.471 00:14:25.471 Run status group 0 (all jobs): 00:14:25.471 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=692MiB (725MB), run=5002-5002msec 00:14:26.043 ----------------------------------------------------- 00:14:26.043 Suppressions used: 00:14:26.043 count bytes template 00:14:26.043 1 11 /usr/src/fio/parse.c 00:14:26.043 1 8 libtcmalloc_minimal.so 00:14:26.043 1 904 libcrypto.so 00:14:26.043 ----------------------------------------------------- 00:14:26.043 00:14:26.043 00:14:26.043 real 0m12.154s 00:14:26.043 user 0m5.883s 00:14:26.043 sys 0m5.516s 00:14:26.043 10:20:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.043 
************************************ 00:14:26.043 END TEST xnvme_fio_plugin 00:14:26.043 ************************************ 00:14:26.043 10:20:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:26.043 10:20:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:26.043 10:20:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.043 10:20:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.043 10:20:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.043 ************************************ 00:14:26.043 START TEST xnvme_rpc 00:14:26.043 ************************************ 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82018 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82018 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82018 ']' 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:26.043 10:20:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.043 [2024-11-29 10:20:05.410422] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
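[Note] At this point the driver script has advanced its outer loop: io_mechanism switches to io_uring_cmd, which talks to the NVMe generic character device (/dev/ng0n1) instead of the block device, and the whole rpc/bdevperf/fio battery reruns for each conserve_cpu setting. A reconstruction of those loops from the trace (the io:filename pairing below is illustrative, not the verbatim script):

  declare -A method_bdev_xnvme_create_0=( ["name"]="xnvme_bdev" )
  xnvme_io=( "io_uring:/dev/nvme0n1" "io_uring_cmd:/dev/ng0n1" )
  xnvme_conserve_cpu=( false true )

  for io in "${xnvme_io[@]}"; do                           # xnvme.sh@75
    method_bdev_xnvme_create_0["io_mechanism"]=${io%%:*}   # xnvme.sh@76
    method_bdev_xnvme_create_0["filename"]=${io##*:}       # xnvme.sh@77
    for cc in "${xnvme_conserve_cpu[@]}"; do               # xnvme.sh@82
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc       # xnvme.sh@83
      run_test xnvme_rpc xnvme_rpc                         # xnvme.sh@86
      run_test xnvme_bdevperf xnvme_bdevperf               # xnvme.sh@87
      run_test xnvme_fio_plugin xnvme_fio_plugin           # xnvme.sh@88
    done
  done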
00:14:26.043 [2024-11-29 10:20:05.410571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82018 ] 00:14:26.303 [2024-11-29 10:20:05.555945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.303 [2024-11-29 10:20:05.584374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 xnvme_bdev 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:26.876 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82018 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82018 ']' 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82018 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82018 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82018' 00:14:27.137 killing process with pid 82018 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82018 00:14:27.137 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82018 00:14:27.399 00:14:27.399 real 0m1.468s 00:14:27.399 user 0m1.551s 00:14:27.399 sys 0m0.387s 00:14:27.399 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.399 ************************************ 00:14:27.399 END TEST xnvme_rpc 00:14:27.399 ************************************ 00:14:27.399 10:20:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.399 10:20:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:27.399 10:20:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.399 10:20:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.399 10:20:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.399 ************************************ 00:14:27.399 START TEST xnvme_bdevperf 00:14:27.399 ************************************ 00:14:27.399 10:20:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.661 10:20:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 { 00:14:27.661 "subsystems": [ 00:14:27.661 { 00:14:27.661 "subsystem": "bdev", 00:14:27.661 "config": [ 00:14:27.661 { 00:14:27.661 "params": { 00:14:27.661 "io_mechanism": "io_uring_cmd", 00:14:27.661 "conserve_cpu": false, 00:14:27.661 "filename": "/dev/ng0n1", 00:14:27.661 "name": "xnvme_bdev" 00:14:27.661 }, 00:14:27.661 "method": "bdev_xnvme_create" 00:14:27.661 }, 00:14:27.661 { 00:14:27.661 "method": "bdev_wait_for_examine" 00:14:27.661 } 00:14:27.661 ] 00:14:27.661 } 00:14:27.661 ] 00:14:27.661 } 00:14:27.661 [2024-11-29 10:20:06.929916] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:27.661 [2024-11-29 10:20:06.930047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82070 ] 00:14:27.661 [2024-11-29 10:20:07.077337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.661 [2024-11-29 10:20:07.106119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.924 Running I/O for 5 seconds... 00:14:29.833 38208.00 IOPS, 149.25 MiB/s [2024-11-29T10:20:10.239Z] 37184.00 IOPS, 145.25 MiB/s [2024-11-29T10:20:11.624Z] 36458.67 IOPS, 142.42 MiB/s [2024-11-29T10:20:12.570Z] 35664.00 IOPS, 139.31 MiB/s 00:14:33.105 Latency(us) 00:14:33.105 [2024-11-29T10:20:12.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.105 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:33.105 xnvme_bdev : 5.00 35349.15 138.08 0.00 0.00 1806.26 1046.06 4159.02 00:14:33.105 [2024-11-29T10:20:12.570Z] =================================================================================================================== 00:14:33.105 [2024-11-29T10:20:12.570Z] Total : 35349.15 138.08 0.00 0.00 1806.26 1046.06 4159.02 00:14:33.105 10:20:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.105 10:20:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:33.105 10:20:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:33.105 10:20:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.105 10:20:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.105 { 00:14:33.105 "subsystems": [ 00:14:33.105 { 00:14:33.105 "subsystem": "bdev", 00:14:33.105 "config": [ 00:14:33.105 { 00:14:33.105 "params": { 00:14:33.105 "io_mechanism": "io_uring_cmd", 00:14:33.105 "conserve_cpu": false, 00:14:33.105 "filename": "/dev/ng0n1", 00:14:33.105 "name": "xnvme_bdev" 00:14:33.105 }, 00:14:33.105 "method": "bdev_xnvme_create" 00:14:33.105 }, 00:14:33.105 { 00:14:33.105 "method": "bdev_wait_for_examine" 00:14:33.105 } 00:14:33.105 ] 00:14:33.105 } 00:14:33.105 ] 00:14:33.105 } 00:14:33.105 [2024-11-29 10:20:12.474076] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
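[Note] For io_uring_cmd the bdevperf pass covers four workloads instead of two: io_pattern_ref expands to randread, randwrite, unmap, and write_zeroes. The latter two carry no data payload, which is consistent with their IOPS later in the trace landing far above the 4k read/write numbers. The per-pattern loop, reconstructed from xnvme.sh@15/@17 (gen_conf is the helper sketched earlier):

  io_uring_cmd=( randread randwrite unmap write_zeroes )
  for io_pattern in "${io_uring_cmd[@]}"; do
    "$spdk_repo"/build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
  done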
00:14:33.105 [2024-11-29 10:20:12.474220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82143 ] 00:14:33.367 [2024-11-29 10:20:12.621374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.367 [2024-11-29 10:20:12.649646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.367 Running I/O for 5 seconds... 00:14:35.700 39820.00 IOPS, 155.55 MiB/s [2024-11-29T10:20:16.112Z] 38104.50 IOPS, 148.85 MiB/s [2024-11-29T10:20:16.775Z] 37008.33 IOPS, 144.56 MiB/s [2024-11-29T10:20:18.161Z] 36049.00 IOPS, 140.82 MiB/s [2024-11-29T10:20:18.161Z] 34726.60 IOPS, 135.65 MiB/s 00:14:38.696 Latency(us) 00:14:38.696 [2024-11-29T10:20:18.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.696 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:38.696 xnvme_bdev : 5.01 34680.03 135.47 0.00 0.00 1841.26 114.22 11594.83 00:14:38.696 [2024-11-29T10:20:18.161Z] =================================================================================================================== 00:14:38.696 [2024-11-29T10:20:18.161Z] Total : 34680.03 135.47 0.00 0.00 1841.26 114.22 11594.83 00:14:38.696 10:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.696 10:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:38.696 10:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.696 10:20:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.696 10:20:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.696 { 00:14:38.696 "subsystems": [ 00:14:38.696 { 00:14:38.696 "subsystem": "bdev", 00:14:38.696 "config": [ 00:14:38.696 { 00:14:38.696 "params": { 00:14:38.696 "io_mechanism": "io_uring_cmd", 00:14:38.697 "conserve_cpu": false, 00:14:38.697 "filename": "/dev/ng0n1", 00:14:38.697 "name": "xnvme_bdev" 00:14:38.697 }, 00:14:38.697 "method": "bdev_xnvme_create" 00:14:38.697 }, 00:14:38.697 { 00:14:38.697 "method": "bdev_wait_for_examine" 00:14:38.697 } 00:14:38.697 ] 00:14:38.697 } 00:14:38.697 ] 00:14:38.697 } 00:14:38.697 [2024-11-29 10:20:18.036028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:38.697 [2024-11-29 10:20:18.036170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82213 ] 00:14:38.958 [2024-11-29 10:20:18.181385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.958 [2024-11-29 10:20:18.210858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.958 Running I/O for 5 seconds... 
00:14:41.293 77632.00 IOPS, 303.25 MiB/s [2024-11-29T10:20:21.702Z] 77664.00 IOPS, 303.38 MiB/s [2024-11-29T10:20:22.647Z] 78229.33 IOPS, 305.58 MiB/s [2024-11-29T10:20:23.588Z] 77824.00 IOPS, 304.00 MiB/s 00:14:44.123 Latency(us) 00:14:44.123 [2024-11-29T10:20:23.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.123 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:44.123 xnvme_bdev : 5.00 78338.15 306.01 0.00 0.00 813.47 392.27 2545.82 00:14:44.123 [2024-11-29T10:20:23.588Z] =================================================================================================================== 00:14:44.123 [2024-11-29T10:20:23.588Z] Total : 78338.15 306.01 0.00 0.00 813.47 392.27 2545.82 00:14:44.123 10:20:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.123 10:20:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:44.123 10:20:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:44.123 10:20:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:44.123 10:20:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.123 { 00:14:44.123 "subsystems": [ 00:14:44.123 { 00:14:44.123 "subsystem": "bdev", 00:14:44.123 "config": [ 00:14:44.123 { 00:14:44.123 "params": { 00:14:44.123 "io_mechanism": "io_uring_cmd", 00:14:44.123 "conserve_cpu": false, 00:14:44.123 "filename": "/dev/ng0n1", 00:14:44.123 "name": "xnvme_bdev" 00:14:44.123 }, 00:14:44.123 "method": "bdev_xnvme_create" 00:14:44.123 }, 00:14:44.123 { 00:14:44.123 "method": "bdev_wait_for_examine" 00:14:44.123 } 00:14:44.123 ] 00:14:44.123 } 00:14:44.123 ] 00:14:44.123 } 00:14:44.123 [2024-11-29 10:20:23.571535] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:44.123 [2024-11-29 10:20:23.571642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82282 ] 00:14:44.381 [2024-11-29 10:20:23.712120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.381 [2024-11-29 10:20:23.732879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.381 Running I/O for 5 seconds... 
00:14:46.683 71952.00 IOPS, 281.06 MiB/s [2024-11-29T10:20:27.091Z] 68249.00 IOPS, 266.60 MiB/s [2024-11-29T10:20:28.031Z] 68326.00 IOPS, 266.90 MiB/s [2024-11-29T10:20:28.977Z] 66657.50 IOPS, 260.38 MiB/s [2024-11-29T10:20:28.977Z] 64539.60 IOPS, 252.11 MiB/s 00:14:49.512 Latency(us) 00:14:49.512 [2024-11-29T10:20:28.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.512 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:49.512 xnvme_bdev : 5.00 64503.52 251.97 0.00 0.00 988.88 126.82 10687.41 00:14:49.512 [2024-11-29T10:20:28.977Z] =================================================================================================================== 00:14:49.512 [2024-11-29T10:20:28.977Z] Total : 64503.52 251.97 0.00 0.00 988.88 126.82 10687.41 00:14:49.774 00:14:49.774 real 0m22.209s 00:14:49.774 user 0m11.392s 00:14:49.774 sys 0m10.330s 00:14:49.774 10:20:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.774 ************************************ 00:14:49.774 END TEST xnvme_bdevperf 00:14:49.774 ************************************ 00:14:49.774 10:20:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.774 10:20:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:49.774 10:20:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:49.774 10:20:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.774 10:20:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.774 ************************************ 00:14:49.774 START TEST xnvme_fio_plugin 00:14:49.774 ************************************ 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
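The shell trace around this point assembles the fio plugin invocation piece by piece. Condensed into one command it is roughly the sketch below; the here-document on fd 62 stands in for the test's gen_conf process substitution, and the JSON is the same config this run prints, so treat the sketch as illustrative rather than the exact helper code.

# Sketch: the fio plugin launch this trace builds up. fio loads the SPDK
# bdev engine through LD_PRELOAD (ASan listed first, because the trace
# detects libasan linked into the plugin), and --filename names a bdev
# from the JSON config, not a file on disk.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
    62<<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": false,
              "filename": "/dev/ng0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON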
00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:49.774 10:20:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:49.774 { 00:14:49.774 "subsystems": [ 00:14:49.774 { 00:14:49.774 "subsystem": "bdev", 00:14:49.774 "config": [ 00:14:49.774 { 00:14:49.774 "params": { 00:14:49.774 "io_mechanism": "io_uring_cmd", 00:14:49.774 "conserve_cpu": false, 00:14:49.774 "filename": "/dev/ng0n1", 00:14:49.774 "name": "xnvme_bdev" 00:14:49.774 }, 00:14:49.774 "method": "bdev_xnvme_create" 00:14:49.774 }, 00:14:49.774 { 00:14:49.774 "method": "bdev_wait_for_examine" 00:14:49.774 } 00:14:49.774 ] 00:14:49.774 } 00:14:49.774 ] 00:14:49.774 } 00:14:50.035 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:50.035 fio-3.35 00:14:50.035 Starting 1 thread 00:14:56.619 00:14:56.619 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82384: Fri Nov 29 10:20:34 2024 00:14:56.619 read: IOPS=47.7k, BW=186MiB/s (195MB/s)(931MiB/5001msec) 00:14:56.619 slat (nsec): min=2143, max=72211, avg=3770.38, stdev=1796.77 00:14:56.619 clat (usec): min=575, max=3682, avg=1192.57, stdev=378.21 00:14:56.619 lat (usec): min=577, max=3718, avg=1196.34, stdev=378.52 00:14:56.619 clat percentiles (usec): 00:14:56.619 | 1.00th=[ 693], 5.00th=[ 775], 10.00th=[ 840], 20.00th=[ 898], 00:14:56.619 | 30.00th=[ 938], 40.00th=[ 988], 50.00th=[ 1037], 60.00th=[ 1106], 00:14:56.619 | 70.00th=[ 1369], 80.00th=[ 1549], 90.00th=[ 1762], 95.00th=[ 1909], 00:14:56.619 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2737], 99.95th=[ 2900], 00:14:56.619 | 99.99th=[ 3458] 00:14:56.619 bw ( KiB/s): min=139776, max=234496, per=97.89%, avg=186651.56, stdev=44815.91, samples=9 00:14:56.619 iops : min=34944, max=58624, avg=46662.89, stdev=11203.98, samples=9 00:14:56.619 lat (usec) : 750=3.38%, 1000=39.05% 00:14:56.619 lat (msec) : 2=54.16%, 4=3.41% 00:14:56.619 cpu : usr=36.76%, sys=62.08%, ctx=14, majf=0, minf=1063 00:14:56.619 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:56.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.619 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:56.619 issued rwts: total=238399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:56.619 00:14:56.619 Run status group 0 (all jobs): 00:14:56.619 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=931MiB (976MB), run=5001-5001msec 00:14:56.619 ----------------------------------------------------- 00:14:56.619 Suppressions used: 00:14:56.619 count bytes template 00:14:56.619 1 11 /usr/src/fio/parse.c 00:14:56.619 1 8 libtcmalloc_minimal.so 00:14:56.619 1 904 libcrypto.so 00:14:56.619 ----------------------------------------------------- 00:14:56.619 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:56.619 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:56.620 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:56.620 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:56.620 10:20:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.620 { 00:14:56.620 "subsystems": [ 00:14:56.620 { 00:14:56.620 "subsystem": "bdev", 00:14:56.620 "config": [ 00:14:56.620 { 00:14:56.620 "params": { 00:14:56.620 "io_mechanism": "io_uring_cmd", 00:14:56.620 "conserve_cpu": false, 00:14:56.620 "filename": "/dev/ng0n1", 00:14:56.620 "name": "xnvme_bdev" 00:14:56.620 }, 00:14:56.620 "method": "bdev_xnvme_create" 00:14:56.620 }, 00:14:56.620 { 00:14:56.620 "method": "bdev_wait_for_examine" 00:14:56.620 } 00:14:56.620 ] 00:14:56.620 } 00:14:56.620 ] 00:14:56.620 } 00:14:56.620 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:56.620 fio-3.35 00:14:56.620 Starting 1 thread 00:15:01.897 00:15:01.897 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82468: Fri Nov 29 10:20:40 2024 00:15:01.897 write: IOPS=46.4k, BW=181MiB/s (190MB/s)(906MiB/5001msec); 0 zone resets 00:15:01.897 slat (usec): min=2, max=1096, avg= 4.23, stdev= 7.14 00:15:01.897 clat (usec): min=134, max=10176, avg=1229.21, stdev=541.19 00:15:01.897 lat (usec): min=142, max=10191, avg=1233.44, stdev=542.24 00:15:01.897 clat percentiles (usec): 00:15:01.898 | 1.00th=[ 644], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 832], 00:15:01.898 | 30.00th=[ 889], 40.00th=[ 963], 50.00th=[ 1057], 60.00th=[ 1205], 00:15:01.898 | 70.00th=[ 1385], 80.00th=[ 1598], 90.00th=[ 1876], 95.00th=[ 2147], 00:15:01.898 | 99.00th=[ 2900], 99.50th=[ 3425], 99.90th=[ 5735], 99.95th=[ 6456], 00:15:01.898 | 99.99th=[ 9896] 00:15:01.898 bw ( KiB/s): min=126288, max=261632, per=100.00%, avg=189946.00, stdev=51908.07, samples=9 00:15:01.898 iops : min=31572, max=65408, avg=47486.44, stdev=12977.09, samples=9 00:15:01.898 lat (usec) : 250=0.02%, 500=0.25%, 750=8.30%, 1000=36.25% 00:15:01.898 lat (msec) : 2=47.82%, 4=7.05%, 10=0.30%, 20=0.01% 00:15:01.898 cpu : usr=39.72%, sys=55.04%, ctx=35, majf=0, minf=1064 00:15:01.898 IO depths : 1=1.3%, 2=2.5%, 4=5.2%, 8=10.5%, 16=21.6%, 32=57.0%, >=64=1.9% 00:15:01.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.898 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.4%, >=64=0.0% 00:15:01.898 issued rwts: total=0,231952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.898 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:01.898 00:15:01.898 Run status group 0 (all jobs): 00:15:01.898 WRITE: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=906MiB (950MB), run=5001-5001msec 00:15:01.898 ----------------------------------------------------- 00:15:01.898 Suppressions used: 00:15:01.898 count bytes template 00:15:01.898 1 11 /usr/src/fio/parse.c 00:15:01.898 1 8 libtcmalloc_minimal.so 00:15:01.898 1 904 libcrypto.so 00:15:01.898 ----------------------------------------------------- 00:15:01.898 00:15:01.898 00:15:01.898 real 0m12.097s 00:15:01.898 user 0m5.015s 00:15:01.898 sys 0m6.439s 00:15:01.898 10:20:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.898 10:20:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.898 ************************************ 00:15:01.898 END TEST xnvme_fio_plugin 00:15:01.898 ************************************ 00:15:01.898 10:20:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:01.898 10:20:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:01.898 10:20:41 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:01.898 10:20:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:01.898 10:20:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.898 10:20:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.898 10:20:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.898 ************************************ 00:15:01.898 START TEST xnvme_rpc 00:15:01.898 ************************************ 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82555 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82555 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82555 ']' 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.898 10:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.160 [2024-11-29 10:20:41.396245] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
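The xnvme_rpc test starting here drives the target purely over RPC. Replayed by hand it is roughly the following sketch; the readiness poll is a naive stand-in for the test's waitforlisten helper, and the paths assume the default checkout used throughout this log.

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" & tgt=$!
# crude stand-in for waitforlisten: poll until the RPC socket answers
until "$SPDK/scripts/rpc.py" spdk_get_version &>/dev/null; do sleep 0.2; done
# create the xnvme bdev with conserve_cpu enabled (-c), as traced below
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# read the registered parameters back out of the framework config
"$SPDK/scripts/rpc.py" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params'
"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt"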
00:15:02.160 [2024-11-29 10:20:41.396421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82555 ] 00:15:02.160 [2024-11-29 10:20:41.547673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.160 [2024-11-29 10:20:41.588644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.104 xnvme_bdev 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82555 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82555 ']' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82555 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82555 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.104 killing process with pid 82555 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82555' 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82555 00:15:03.104 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82555 00:15:03.366 00:15:03.366 real 0m1.431s 00:15:03.366 user 0m1.460s 00:15:03.366 sys 0m0.466s 00:15:03.366 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.366 ************************************ 00:15:03.366 END TEST xnvme_rpc 00:15:03.366 ************************************ 00:15:03.366 10:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.366 10:20:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:03.366 10:20:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:03.366 10:20:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.366 10:20:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.366 ************************************ 00:15:03.366 START TEST xnvme_bdevperf 00:15:03.366 ************************************ 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:03.366 10:20:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:03.367 10:20:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:03.629 { 00:15:03.629 "subsystems": [ 00:15:03.629 { 00:15:03.629 "subsystem": "bdev", 00:15:03.629 "config": [ 00:15:03.629 { 00:15:03.629 "params": { 00:15:03.629 "io_mechanism": "io_uring_cmd", 00:15:03.629 "conserve_cpu": true, 00:15:03.629 "filename": "/dev/ng0n1", 00:15:03.629 "name": "xnvme_bdev" 00:15:03.629 }, 00:15:03.629 "method": "bdev_xnvme_create" 00:15:03.629 }, 00:15:03.629 { 00:15:03.629 "method": "bdev_wait_for_examine" 00:15:03.629 } 00:15:03.629 ] 00:15:03.629 } 00:15:03.629 ] 00:15:03.629 } 00:15:03.629 [2024-11-29 10:20:42.869646] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:03.629 [2024-11-29 10:20:42.869839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82613 ] 00:15:03.629 [2024-11-29 10:20:43.016125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.629 [2024-11-29 10:20:43.045377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.891 Running I/O for 5 seconds... 00:15:05.780 39936.00 IOPS, 156.00 MiB/s [2024-11-29T10:20:46.189Z] 41760.00 IOPS, 163.12 MiB/s [2024-11-29T10:20:47.604Z] 42880.00 IOPS, 167.50 MiB/s [2024-11-29T10:20:48.179Z] 42912.00 IOPS, 167.62 MiB/s 00:15:08.714 Latency(us) 00:15:08.714 [2024-11-29T10:20:48.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.714 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:08.714 xnvme_bdev : 5.00 41690.35 162.85 0.00 0.00 1531.60 771.94 4637.93 00:15:08.714 [2024-11-29T10:20:48.179Z] =================================================================================================================== 00:15:08.714 [2024-11-29T10:20:48.179Z] Total : 41690.35 162.85 0.00 0.00 1531.60 771.94 4637.93 00:15:08.976 10:20:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.976 10:20:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:08.976 10:20:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.976 10:20:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.976 10:20:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.976 { 00:15:08.976 "subsystems": [ 00:15:08.976 { 00:15:08.976 "subsystem": "bdev", 00:15:08.976 "config": [ 00:15:08.976 { 00:15:08.976 "params": { 00:15:08.976 "io_mechanism": "io_uring_cmd", 00:15:08.976 "conserve_cpu": true, 00:15:08.976 "filename": "/dev/ng0n1", 00:15:08.976 "name": "xnvme_bdev" 00:15:08.976 }, 00:15:08.976 "method": "bdev_xnvme_create" 00:15:08.976 }, 00:15:08.976 { 00:15:08.976 "method": "bdev_wait_for_examine" 00:15:08.976 } 00:15:08.976 ] 00:15:08.976 } 00:15:08.976 ] 00:15:08.976 } 00:15:08.976 [2024-11-29 10:20:48.406833] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
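Each bdevperf job in this phase differs only in its -w workload; the config JSON printed above is regenerated identically every time. A file-based equivalent of the whole sweep looks like the sketch below, with $SPDK as in the earlier sketch and bdev.json a hypothetical file holding that same JSON.

# -q queue depth, -w workload, -t runtime in seconds, -T bdev/job name,
# -o I/O size in bytes; --json points at the bdev_xnvme_create config.
for w in randread randwrite unmap write_zeroes; do
    "$SPDK/build/examples/bdevperf" --json bdev.json \
        -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done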
00:15:08.976 [2024-11-29 10:20:48.406980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82677 ] 00:15:09.237 [2024-11-29 10:20:48.550247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.237 [2024-11-29 10:20:48.579711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.237 Running I/O for 5 seconds... 00:15:11.239 36871.00 IOPS, 144.03 MiB/s [2024-11-29T10:20:52.090Z] 40117.00 IOPS, 156.71 MiB/s [2024-11-29T10:20:53.034Z] 40840.00 IOPS, 159.53 MiB/s [2024-11-29T10:20:53.978Z] 41210.25 IOPS, 160.98 MiB/s [2024-11-29T10:20:53.978Z] 40175.60 IOPS, 156.94 MiB/s 00:15:14.513 Latency(us) 00:15:14.513 [2024-11-29T10:20:53.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.513 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:14.513 xnvme_bdev : 5.00 40160.46 156.88 0.00 0.00 1589.53 825.50 4083.40 00:15:14.513 [2024-11-29T10:20:53.978Z] =================================================================================================================== 00:15:14.513 [2024-11-29T10:20:53.978Z] Total : 40160.46 156.88 0.00 0.00 1589.53 825.50 4083.40 00:15:14.513 10:20:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.513 10:20:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:14.513 10:20:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:14.513 10:20:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:14.513 10:20:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.513 { 00:15:14.513 "subsystems": [ 00:15:14.513 { 00:15:14.513 "subsystem": "bdev", 00:15:14.513 "config": [ 00:15:14.513 { 00:15:14.513 "params": { 00:15:14.513 "io_mechanism": "io_uring_cmd", 00:15:14.513 "conserve_cpu": true, 00:15:14.513 "filename": "/dev/ng0n1", 00:15:14.513 "name": "xnvme_bdev" 00:15:14.513 }, 00:15:14.513 "method": "bdev_xnvme_create" 00:15:14.513 }, 00:15:14.513 { 00:15:14.513 "method": "bdev_wait_for_examine" 00:15:14.513 } 00:15:14.513 ] 00:15:14.513 } 00:15:14.513 ] 00:15:14.513 } 00:15:14.513 [2024-11-29 10:20:53.951600] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:14.513 [2024-11-29 10:20:53.951742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82741 ] 00:15:14.774 [2024-11-29 10:20:54.095276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.774 [2024-11-29 10:20:54.124263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.774 Running I/O for 5 seconds... 
00:15:17.103 79168.00 IOPS, 309.25 MiB/s [2024-11-29T10:20:57.512Z] 79520.00 IOPS, 310.62 MiB/s [2024-11-29T10:20:58.455Z] 79722.67 IOPS, 311.42 MiB/s [2024-11-29T10:20:59.391Z] 79696.00 IOPS, 311.31 MiB/s [2024-11-29T10:20:59.391Z] 81996.80 IOPS, 320.30 MiB/s 00:15:19.926 Latency(us) 00:15:19.926 [2024-11-29T10:20:59.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.926 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:19.926 xnvme_bdev : 5.00 81959.75 320.16 0.00 0.00 777.43 341.86 4133.81 00:15:19.926 [2024-11-29T10:20:59.391Z] =================================================================================================================== 00:15:19.926 [2024-11-29T10:20:59.391Z] Total : 81959.75 320.16 0.00 0.00 777.43 341.86 4133.81 00:15:19.926 10:20:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:19.926 10:20:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:19.926 10:20:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:19.926 10:20:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:19.926 10:20:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.186 { 00:15:20.186 "subsystems": [ 00:15:20.186 { 00:15:20.186 "subsystem": "bdev", 00:15:20.186 "config": [ 00:15:20.186 { 00:15:20.186 "params": { 00:15:20.186 "io_mechanism": "io_uring_cmd", 00:15:20.186 "conserve_cpu": true, 00:15:20.186 "filename": "/dev/ng0n1", 00:15:20.186 "name": "xnvme_bdev" 00:15:20.186 }, 00:15:20.186 "method": "bdev_xnvme_create" 00:15:20.186 }, 00:15:20.186 { 00:15:20.186 "method": "bdev_wait_for_examine" 00:15:20.186 } 00:15:20.186 ] 00:15:20.186 } 00:15:20.186 ] 00:15:20.186 } 00:15:20.186 [2024-11-29 10:20:59.407714] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:20.186 [2024-11-29 10:20:59.407837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82814 ] 00:15:20.186 [2024-11-29 10:20:59.547878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.186 [2024-11-29 10:20:59.574976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.445 Running I/O for 5 seconds... 
00:15:22.311 42623.00 IOPS, 166.50 MiB/s [2024-11-29T10:21:02.715Z] 43333.00 IOPS, 169.27 MiB/s [2024-11-29T10:21:04.098Z] 41708.67 IOPS, 162.92 MiB/s [2024-11-29T10:21:05.045Z] 39373.50 IOPS, 153.80 MiB/s [2024-11-29T10:21:05.045Z] 37990.20 IOPS, 148.40 MiB/s 00:15:25.580 Latency(us) 00:15:25.580 [2024-11-29T10:21:05.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.580 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:25.580 xnvme_bdev : 5.00 37969.36 148.32 0.00 0.00 1680.01 281.99 20568.22 00:15:25.580 [2024-11-29T10:21:05.045Z] =================================================================================================================== 00:15:25.580 [2024-11-29T10:21:05.045Z] Total : 37969.36 148.32 0.00 0.00 1680.01 281.99 20568.22 00:15:25.580 00:15:25.580 real 0m22.067s 00:15:25.580 user 0m14.533s 00:15:25.580 sys 0m5.416s 00:15:25.580 10:21:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.580 ************************************ 00:15:25.580 END TEST xnvme_bdevperf 00:15:25.580 ************************************ 00:15:25.580 10:21:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.580 10:21:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:25.580 10:21:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:25.580 10:21:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.580 10:21:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.580 ************************************ 00:15:25.580 START TEST xnvme_fio_plugin 00:15:25.580 ************************************ 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:25.580 10:21:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:25.580 { 00:15:25.580 "subsystems": [ 00:15:25.580 { 00:15:25.580 "subsystem": "bdev", 00:15:25.580 "config": [ 00:15:25.580 { 00:15:25.580 "params": { 00:15:25.580 "io_mechanism": "io_uring_cmd", 00:15:25.580 "conserve_cpu": true, 00:15:25.580 "filename": "/dev/ng0n1", 00:15:25.580 "name": "xnvme_bdev" 00:15:25.580 }, 00:15:25.580 "method": "bdev_xnvme_create" 00:15:25.580 }, 00:15:25.580 { 00:15:25.580 "method": "bdev_wait_for_examine" 00:15:25.580 } 00:15:25.580 ] 00:15:25.580 } 00:15:25.580 ] 00:15:25.580 } 00:15:25.840 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:25.840 fio-3.35 00:15:25.840 Starting 1 thread 00:15:31.137 00:15:31.137 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82921: Fri Nov 29 10:21:10 2024 00:15:31.137 read: IOPS=40.8k, BW=159MiB/s (167MB/s)(797MiB/5002msec) 00:15:31.137 slat (nsec): min=2891, max=93651, avg=3490.80, stdev=1654.88 00:15:31.137 clat (usec): min=897, max=5071, avg=1430.02, stdev=262.11 00:15:31.137 lat (usec): min=900, max=5074, avg=1433.52, stdev=262.51 00:15:31.137 clat percentiles (usec): 00:15:31.137 | 1.00th=[ 1012], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1205], 00:15:31.137 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1401], 60.00th=[ 1467], 00:15:31.137 | 70.00th=[ 1549], 80.00th=[ 1631], 90.00th=[ 1778], 95.00th=[ 1909], 00:15:31.137 | 99.00th=[ 2180], 99.50th=[ 2311], 99.90th=[ 2638], 99.95th=[ 3097], 00:15:31.137 | 99.99th=[ 3392] 00:15:31.137 bw ( KiB/s): min=142848, max=188928, per=99.00%, avg=161530.44, stdev=19444.31, samples=9 00:15:31.137 iops : min=35712, max=47232, avg=40382.56, stdev=4861.12, samples=9 00:15:31.137 lat (usec) : 1000=0.63% 00:15:31.137 lat (msec) : 2=96.41%, 4=2.96%, 10=0.01% 00:15:31.137 cpu : usr=63.91%, sys=33.07%, ctx=26, majf=0, minf=1063 00:15:31.137 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:31.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.137 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:31.137 issued rwts: total=204031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:31.137 00:15:31.137 Run status group 0 (all jobs): 00:15:31.137 READ: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=797MiB (836MB), run=5002-5002msec 00:15:31.712 ----------------------------------------------------- 00:15:31.712 Suppressions used: 00:15:31.712 count bytes template 00:15:31.712 1 11 /usr/src/fio/parse.c 00:15:31.712 1 8 libtcmalloc_minimal.so 00:15:31.712 1 904 libcrypto.so 00:15:31.712 ----------------------------------------------------- 00:15:31.712 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:31.712 10:21:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.712 { 00:15:31.712 "subsystems": [ 00:15:31.712 { 00:15:31.712 "subsystem": "bdev", 00:15:31.712 "config": [ 00:15:31.712 { 00:15:31.712 "params": { 00:15:31.712 "io_mechanism": "io_uring_cmd", 00:15:31.712 "conserve_cpu": true, 00:15:31.712 "filename": "/dev/ng0n1", 00:15:31.712 "name": "xnvme_bdev" 00:15:31.712 }, 00:15:31.712 "method": "bdev_xnvme_create" 00:15:31.712 }, 00:15:31.712 { 00:15:31.712 "method": "bdev_wait_for_examine" 00:15:31.712 } 00:15:31.712 ] 00:15:31.712 } 00:15:31.712 ] 00:15:31.712 } 00:15:31.712 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:31.712 fio-3.35 00:15:31.712 Starting 1 thread 00:15:38.300 00:15:38.300 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82996: Fri Nov 29 10:21:16 2024 00:15:38.300 write: IOPS=38.8k, BW=151MiB/s (159MB/s)(758MiB/5002msec); 0 zone resets 00:15:38.300 slat (usec): min=2, max=538, avg= 4.24, stdev= 3.68 00:15:38.300 clat (usec): min=408, max=6703, avg=1487.24, stdev=324.77 00:15:38.300 lat (usec): min=411, max=6710, avg=1491.48, stdev=325.20 00:15:38.300 clat percentiles (usec): 00:15:38.300 | 1.00th=[ 906], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1237], 00:15:38.300 | 30.00th=[ 1303], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1532], 00:15:38.300 | 70.00th=[ 1614], 80.00th=[ 1713], 90.00th=[ 1860], 95.00th=[ 2024], 00:15:38.300 | 99.00th=[ 2409], 99.50th=[ 2671], 99.90th=[ 3982], 99.95th=[ 4490], 00:15:38.300 | 99.99th=[ 5669] 00:15:38.300 bw ( KiB/s): min=142896, max=182704, per=100.00%, avg=155578.67, stdev=11280.36, samples=9 00:15:38.300 iops : min=35724, max=45676, avg=38894.67, stdev=2820.09, samples=9 00:15:38.300 lat (usec) : 500=0.01%, 750=0.12%, 1000=2.48% 00:15:38.300 lat (msec) : 2=92.01%, 4=5.28%, 10=0.09% 00:15:38.300 cpu : usr=48.51%, sys=42.71%, ctx=23, majf=0, minf=1064 00:15:38.300 IO depths : 1=1.2%, 2=2.6%, 4=5.5%, 8=11.8%, 16=24.8%, 32=52.3%, >=64=1.8% 00:15:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.300 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:38.300 issued rwts: total=0,193922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:38.300 00:15:38.300 Run status group 0 (all jobs): 00:15:38.300 WRITE: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=758MiB (794MB), run=5002-5002msec 00:15:38.300 ----------------------------------------------------- 00:15:38.300 Suppressions used: 00:15:38.300 count bytes template 00:15:38.300 1 11 /usr/src/fio/parse.c 00:15:38.300 1 8 libtcmalloc_minimal.so 00:15:38.300 1 904 libcrypto.so 00:15:38.300 ----------------------------------------------------- 00:15:38.301 00:15:38.301 00:15:38.301 real 0m12.069s 00:15:38.301 user 0m6.773s 00:15:38.301 sys 0m4.395s 00:15:38.301 10:21:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.301 ************************************ 00:15:38.301 END TEST xnvme_fio_plugin 00:15:38.301 10:21:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 ************************************ 00:15:38.301 10:21:17 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82555 00:15:38.301 10:21:17 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82555 ']' 00:15:38.301 10:21:17 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 82555 
00:15:38.301 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82555) - No such process 00:15:38.301 Process with pid 82555 is not found 00:15:38.301 10:21:17 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82555 is not found' 00:15:38.301 00:15:38.301 real 2m57.839s 00:15:38.301 user 1m24.715s 00:15:38.301 sys 1m18.357s 00:15:38.301 ************************************ 00:15:38.301 END TEST nvme_xnvme 00:15:38.301 ************************************ 00:15:38.301 10:21:17 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.301 10:21:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 10:21:17 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:38.301 10:21:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:38.301 10:21:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.301 10:21:17 -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 ************************************ 00:15:38.301 START TEST blockdev_xnvme 00:15:38.301 ************************************ 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:38.301 * Looking for test storage... 00:15:38.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.301 10:21:17 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 10:21:17 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:38.301 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83129 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83129 00:15:38.302 10:21:17 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83129 ']' 00:15:38.302 10:21:17 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.302 10:21:17 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.302 10:21:17 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.302 10:21:17 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.302 10:21:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.302 10:21:17 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:38.302 [2024-11-29 10:21:17.386085] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
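The setup traced below (setup_xnvme_conf in bdev/blockdev.sh) walks every /dev/nvme*n* namespace and queues one bdev_xnvme_create command per usable device. Stripped of the zoned-device bookkeeping that the trace also shows, it reduces to roughly this sketch:

io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue    # block devices only; the real script
                                  # additionally skips zoned namespaces
    # ${nvme##*/} strips the /dev/ prefix, naming the bdev after the namespace
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done
printf '%s\n' "${nvmes[@]}"   # e.g. bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c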
00:15:38.302 [2024-11-29 10:21:17.386228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83129 ] 00:15:38.302 [2024-11-29 10:21:17.534462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.302 [2024-11-29 10:21:17.564079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.874 10:21:18 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.874 10:21:18 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:38.874 10:21:18 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:38.874 10:21:18 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:38.874 10:21:18 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:38.874 10:21:18 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:38.874 10:21:18 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:39.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:40.020 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:40.020 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:40.020 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:40.020 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:40.020 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0c0n1 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in 
/sys/block/nvme* 00:15:40.020 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- 
# for nvme in /dev/nvme*n* 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:40.021 nvme0n1 00:15:40.021 nvme1n1 00:15:40.021 nvme1n2 00:15:40.021 nvme1n3 00:15:40.021 nvme2n1 00:15:40.021 nvme3n1 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.021 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.021 10:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.283 10:21:19 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.283 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:40.283 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:40.283 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:40.283 10:21:19 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.283 10:21:19 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.283 10:21:19 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.283 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:40.283 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:40.284 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "21b73923-7d93-4c10-9cd7-44f1743b7da1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21b73923-7d93-4c10-9cd7-44f1743b7da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c3e1d98e-7654-4136-92d5-a92330767984"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c3e1d98e-7654-4136-92d5-a92330767984",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "d51b6358-83fa-4ff3-9be1-70c3d34c9549"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d51b6358-83fa-4ff3-9be1-70c3d34c9549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "adb62124-9363-4959-b45b-439028cbd06f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "adb62124-9363-4959-b45b-439028cbd06f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f82aeb2f-cf12-4efb-a1df-6ef01c57b8b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f82aeb2f-cf12-4efb-a1df-6ef01c57b8b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e38d9bb5-4373-4c9d-bebf-56156f184b55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e38d9bb5-4373-4c9d-bebf-56156f184b55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:40.284 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:40.284 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:40.284 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:40.284 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83129 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83129 ']' 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83129 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83129 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.284 killing process with pid 83129 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83129' 00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83129 
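Between the zoned-device filter (the is_block_zoned checks above read /sys/block/*/queue/zoned) and the JSON dump, every non-zoned /dev/nvme*n* node gets registered as an xNVMe bdev over io_uring through the RPCs the script prints. A hand-run equivalent for a single device, assuming a spdk_tgt already listening on the default socket; the device path and name are illustrative, and in current SPDK trees the trailing -c maps to bdev_xnvme_create's conserve-CPU option:

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
    # Inspect the result; this is the same call that produced the per-bdev
    # JSON ("product_name": "xNVMe bdev", "block_size": 4096, ...) above:
    scripts/rpc.py bdev_get_bdevs -b nvme0n1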
00:15:40.284 10:21:19 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83129 00:15:40.545 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:40.545 10:21:19 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:40.545 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:40.545 10:21:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.545 10:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.545 ************************************ 00:15:40.545 START TEST bdev_hello_world 00:15:40.545 ************************************ 00:15:40.545 10:21:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:40.545 [2024-11-29 10:21:19.990895] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:40.545 [2024-11-29 10:21:19.991037] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83398 ] 00:15:40.807 [2024-11-29 10:21:20.138250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.807 [2024-11-29 10:21:20.168465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.068 [2024-11-29 10:21:20.397677] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:41.068 [2024-11-29 10:21:20.397762] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:41.068 [2024-11-29 10:21:20.397791] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:41.068 [2024-11-29 10:21:20.400076] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:41.068 [2024-11-29 10:21:20.400881] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:41.068 [2024-11-29 10:21:20.400934] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:41.068 [2024-11-29 10:21:20.401435] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
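The bdev_hello_world pass is SPDK's stock hello_bdev example run against the first bdev of the generated config: open the bdev, grab an I/O channel, write "Hello World!", read it back, and stop the app once the read completes, which is exactly the NOTICE sequence above. The invocation from the trace, runnable by hand against the same config file:

    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1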
00:15:41.068 00:15:41.068 [2024-11-29 10:21:20.401477] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:41.330 00:15:41.330 real 0m0.666s 00:15:41.330 user 0m0.326s 00:15:41.330 sys 0m0.195s 00:15:41.330 10:21:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.330 ************************************ 00:15:41.330 END TEST bdev_hello_world 00:15:41.330 ************************************ 00:15:41.330 10:21:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:41.330 10:21:20 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:41.330 10:21:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.330 10:21:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.330 10:21:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.330 ************************************ 00:15:41.330 START TEST bdev_bounds 00:15:41.330 ************************************ 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83423 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:41.330 Process bdevio pid: 83423 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83423' 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83423 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83423 ']' 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.330 10:21:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:41.330 [2024-11-29 10:21:20.733052] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
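bdev_bounds pushes the same six bdevs through SPDK's bdevio CUnit harness. bdevio starts as a server and the test run is triggered from a second process over RPC; both command lines mirror the trace, while the flag readings are inferences (-s 0 appears to pass the PRE_RESERVED_MEM=0 value set earlier, and -w appears to make bdevio wait to be driven over RPC rather than run immediately):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ... poll /var/tmp/spdk.sock as in the waitforlisten sketch earlier ...
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"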
00:15:41.330 [2024-11-29 10:21:20.733197] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83423 ] 00:15:41.592 [2024-11-29 10:21:20.881412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.592 [2024-11-29 10:21:20.913365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.592 [2024-11-29 10:21:20.913594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.592 [2024-11-29 10:21:20.913686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.165 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.165 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:42.165 10:21:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:42.428 I/O targets: 00:15:42.428 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:42.428 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:42.428 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:42.428 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:42.428 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:42.428 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:42.428 00:15:42.428 00:15:42.428 CUnit - A unit testing framework for C - Version 2.1-3 00:15:42.428 http://cunit.sourceforge.net/ 00:15:42.428 00:15:42.428 00:15:42.428 Suite: bdevio tests on: nvme3n1 00:15:42.428 Test: blockdev write read block ...passed 00:15:42.428 Test: blockdev write zeroes read block ...passed 00:15:42.428 Test: blockdev write zeroes read no split ...passed 00:15:42.428 Test: blockdev write zeroes read split ...passed 00:15:42.428 Test: blockdev write zeroes read split partial ...passed 00:15:42.428 Test: blockdev reset ...passed 00:15:42.428 Test: blockdev write read 8 blocks ...passed 00:15:42.428 Test: blockdev write read size > 128k ...passed 00:15:42.428 Test: blockdev write read invalid size ...passed 00:15:42.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.428 Test: blockdev write read max offset ...passed 00:15:42.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.428 Test: blockdev writev readv 8 blocks ...passed 00:15:42.428 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.428 Test: blockdev writev readv block ...passed 00:15:42.428 Test: blockdev writev readv size > 128k ...passed 00:15:42.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.428 Test: blockdev comparev and writev ...passed 00:15:42.428 Test: blockdev nvme passthru rw ...passed 00:15:42.428 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.428 Test: blockdev nvme admin passthru ...passed 00:15:42.428 Test: blockdev copy ...passed 00:15:42.428 Suite: bdevio tests on: nvme2n1 00:15:42.428 Test: blockdev write read block ...passed 00:15:42.428 Test: blockdev write zeroes read block ...passed 00:15:42.428 Test: blockdev write zeroes read no split ...passed 00:15:42.428 Test: blockdev write zeroes read split ...passed 00:15:42.428 Test: blockdev write zeroes read split partial ...passed 00:15:42.428 Test: blockdev reset ...passed 
00:15:42.428 Test: blockdev write read 8 blocks ...passed 00:15:42.428 Test: blockdev write read size > 128k ...passed 00:15:42.428 Test: blockdev write read invalid size ...passed 00:15:42.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.428 Test: blockdev write read max offset ...passed 00:15:42.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.428 Test: blockdev writev readv 8 blocks ...passed 00:15:42.428 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.428 Test: blockdev writev readv block ...passed 00:15:42.428 Test: blockdev writev readv size > 128k ...passed 00:15:42.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.428 Test: blockdev comparev and writev ...passed 00:15:42.428 Test: blockdev nvme passthru rw ...passed 00:15:42.428 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.428 Test: blockdev nvme admin passthru ...passed 00:15:42.428 Test: blockdev copy ...passed 00:15:42.428 Suite: bdevio tests on: nvme1n3 00:15:42.428 Test: blockdev write read block ...passed 00:15:42.428 Test: blockdev write zeroes read block ...passed 00:15:42.428 Test: blockdev write zeroes read no split ...passed 00:15:42.428 Test: blockdev write zeroes read split ...passed 00:15:42.428 Test: blockdev write zeroes read split partial ...passed 00:15:42.428 Test: blockdev reset ...passed 00:15:42.428 Test: blockdev write read 8 blocks ...passed 00:15:42.428 Test: blockdev write read size > 128k ...passed 00:15:42.428 Test: blockdev write read invalid size ...passed 00:15:42.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.428 Test: blockdev write read max offset ...passed 00:15:42.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.428 Test: blockdev writev readv 8 blocks ...passed 00:15:42.428 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.428 Test: blockdev writev readv block ...passed 00:15:42.428 Test: blockdev writev readv size > 128k ...passed 00:15:42.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.428 Test: blockdev comparev and writev ...passed 00:15:42.428 Test: blockdev nvme passthru rw ...passed 00:15:42.428 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.428 Test: blockdev nvme admin passthru ...passed 00:15:42.428 Test: blockdev copy ...passed 00:15:42.428 Suite: bdevio tests on: nvme1n2 00:15:42.428 Test: blockdev write read block ...passed 00:15:42.428 Test: blockdev write zeroes read block ...passed 00:15:42.428 Test: blockdev write zeroes read no split ...passed 00:15:42.428 Test: blockdev write zeroes read split ...passed 00:15:42.428 Test: blockdev write zeroes read split partial ...passed 00:15:42.428 Test: blockdev reset ...passed 00:15:42.428 Test: blockdev write read 8 blocks ...passed 00:15:42.428 Test: blockdev write read size > 128k ...passed 00:15:42.428 Test: blockdev write read invalid size ...passed 00:15:42.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.428 Test: blockdev write read max offset ...passed 00:15:42.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.428 Test: blockdev writev readv 8 blocks 
...passed 00:15:42.428 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.428 Test: blockdev writev readv block ...passed 00:15:42.428 Test: blockdev writev readv size > 128k ...passed 00:15:42.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.428 Test: blockdev comparev and writev ...passed 00:15:42.428 Test: blockdev nvme passthru rw ...passed 00:15:42.428 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.428 Test: blockdev nvme admin passthru ...passed 00:15:42.428 Test: blockdev copy ...passed 00:15:42.428 Suite: bdevio tests on: nvme1n1 00:15:42.428 Test: blockdev write read block ...passed 00:15:42.428 Test: blockdev write zeroes read block ...passed 00:15:42.690 Test: blockdev write zeroes read no split ...passed 00:15:42.690 Test: blockdev write zeroes read split ...passed 00:15:42.690 Test: blockdev write zeroes read split partial ...passed 00:15:42.690 Test: blockdev reset ...passed 00:15:42.690 Test: blockdev write read 8 blocks ...passed 00:15:42.690 Test: blockdev write read size > 128k ...passed 00:15:42.690 Test: blockdev write read invalid size ...passed 00:15:42.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.690 Test: blockdev write read max offset ...passed 00:15:42.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.690 Test: blockdev writev readv 8 blocks ...passed 00:15:42.690 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.690 Test: blockdev writev readv block ...passed 00:15:42.690 Test: blockdev writev readv size > 128k ...passed 00:15:42.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.690 Test: blockdev comparev and writev ...passed 00:15:42.690 Test: blockdev nvme passthru rw ...passed 00:15:42.690 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.690 Test: blockdev nvme admin passthru ...passed 00:15:42.690 Test: blockdev copy ...passed 00:15:42.690 Suite: bdevio tests on: nvme0n1 00:15:42.690 Test: blockdev write read block ...passed 00:15:42.690 Test: blockdev write zeroes read block ...passed 00:15:42.690 Test: blockdev write zeroes read no split ...passed 00:15:42.690 Test: blockdev write zeroes read split ...passed 00:15:42.690 Test: blockdev write zeroes read split partial ...passed 00:15:42.690 Test: blockdev reset ...passed 00:15:42.690 Test: blockdev write read 8 blocks ...passed 00:15:42.690 Test: blockdev write read size > 128k ...passed 00:15:42.690 Test: blockdev write read invalid size ...passed 00:15:42.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.690 Test: blockdev write read max offset ...passed 00:15:42.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.690 Test: blockdev writev readv 8 blocks ...passed 00:15:42.690 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.690 Test: blockdev writev readv block ...passed 00:15:42.690 Test: blockdev writev readv size > 128k ...passed 00:15:42.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.690 Test: blockdev comparev and writev ...passed 00:15:42.690 Test: blockdev nvme passthru rw ...passed 00:15:42.690 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.690 Test: blockdev nvme admin passthru ...passed 00:15:42.690 Test: blockdev copy ...passed 
00:15:42.690 00:15:42.690 Run Summary: Type Total Ran Passed Failed Inactive 00:15:42.690 suites 6 6 n/a 0 0 00:15:42.690 tests 138 138 138 0 0 00:15:42.690 asserts 780 780 780 0 n/a 00:15:42.690 00:15:42.690 Elapsed time = 0.620 seconds 00:15:42.690 0 00:15:42.690 10:21:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83423 00:15:42.690 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83423 ']' 00:15:42.690 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83423 00:15:42.690 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:42.690 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.690 10:21:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83423 00:15:42.690 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.690 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.690 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83423' 00:15:42.690 killing process with pid 83423 00:15:42.690 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83423 00:15:42.690 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83423 00:15:42.952 10:21:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:42.952 00:15:42.952 real 0m1.534s 00:15:42.952 user 0m3.748s 00:15:42.952 sys 0m0.355s 00:15:42.952 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.952 10:21:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:42.952 ************************************ 00:15:42.952 END TEST bdev_bounds 00:15:42.952 ************************************ 00:15:42.952 10:21:22 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:15:42.952 10:21:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:42.952 10:21:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.952 10:21:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.952 ************************************ 00:15:42.952 START TEST bdev_nbd 00:15:42.952 ************************************ 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83472 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83472 /var/tmp/spdk-nbd.sock 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83472 ']' 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:42.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.952 10:21:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:42.952 [2024-11-29 10:21:22.338764] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
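The bdev_nbd test that follows (nbd_rpc_start_stop_verify) runs a bdev_svc app on /var/tmp/spdk-nbd.sock, exports each of the six bdevs as a kernel /dev/nbdX node, proves the node accepts I/O with a single direct 4 KiB read, then detaches everything until nbd_get_disks reports an empty list. Condensed to one device, with the socket and RPC names exactly as the trace uses them; the scratch file path is illustrative:

    rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc nbd_start_disk nvme0n1 /dev/nbd0      # export bdev nvme0n1 as /dev/nbd0
    grep -q -w nbd0 /proc/partitions          # the kernel now knows the device
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rpc nbd_stop_disk /dev/nbd0               # detach; waitfornbd_exit polls for removal
    rpc nbd_get_disks                         # expect: []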
00:15:42.952 [2024-11-29 10:21:22.339123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.214 [2024-11-29 10:21:22.487428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.214 [2024-11-29 10:21:22.517601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.788 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.050 
1+0 records in 00:15:44.050 1+0 records out 00:15:44.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945323 s, 4.3 MB/s 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.050 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.355 1+0 records in 00:15:44.355 1+0 records out 00:15:44.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906463 s, 4.5 MB/s 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.355 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:44.623 10:21:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.623 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.623 1+0 records in 00:15:44.624 1+0 records out 00:15:44.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107151 s, 3.8 MB/s 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.624 10:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.885 1+0 records in 00:15:44.885 1+0 records out 00:15:44.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929783 s, 4.4 MB/s 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.885 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.147 1+0 records in 00:15:45.147 1+0 records out 00:15:45.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127155 s, 3.2 MB/s 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.147 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:45.410 10:21:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.410 1+0 records in 00:15:45.410 1+0 records out 00:15:45.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000952393 s, 4.3 MB/s 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.410 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd0", 00:15:45.673 "bdev_name": "nvme0n1" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd1", 00:15:45.673 "bdev_name": "nvme1n1" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd2", 00:15:45.673 "bdev_name": "nvme1n2" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd3", 00:15:45.673 "bdev_name": "nvme1n3" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd4", 00:15:45.673 "bdev_name": "nvme2n1" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd5", 00:15:45.673 "bdev_name": "nvme3n1" 00:15:45.673 } 00:15:45.673 ]' 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd0", 00:15:45.673 "bdev_name": "nvme0n1" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd1", 00:15:45.673 "bdev_name": "nvme1n1" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd2", 00:15:45.673 "bdev_name": "nvme1n2" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd3", 00:15:45.673 "bdev_name": "nvme1n3" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": "/dev/nbd4", 00:15:45.673 "bdev_name": "nvme2n1" 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "nbd_device": 
"/dev/nbd5", 00:15:45.673 "bdev_name": "nvme3n1" 00:15:45.673 } 00:15:45.673 ]' 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.673 10:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.935 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.196 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.458 10:21:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.720 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.982 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.244 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:47.506 /dev/nbd0 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.506 1+0 records in 00:15:47.506 1+0 records out 00:15:47.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822732 s, 5.0 MB/s 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.506 10:21:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:47.768 /dev/nbd1 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.768 1+0 records in 00:15:47.768 1+0 records out 00:15:47.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953643 s, 4.3 MB/s 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.768 10:21:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.768 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:15:48.029 /dev/nbd10 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.029 1+0 records in 00:15:48.029 1+0 records out 00:15:48.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703277 s, 5.8 MB/s 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.029 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:15:48.290 /dev/nbd11 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.290 10:21:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.290 1+0 records in 00:15:48.290 1+0 records out 00:15:48.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134873 s, 3.0 MB/s 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.290 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:48.552 /dev/nbd12 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.552 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.552 1+0 records in 00:15:48.552 1+0 records out 00:15:48.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135138 s, 3.0 MB/s 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.553 10:21:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:48.553 /dev/nbd13 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.814 1+0 records in 00:15:48.814 1+0 records out 00:15:48.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102954 s, 4.0 MB/s 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:48.814 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:48.814 { 00:15:48.814 "nbd_device": "/dev/nbd0", 00:15:48.814 "bdev_name": "nvme0n1" 00:15:48.814 }, 00:15:48.814 { 00:15:48.814 "nbd_device": "/dev/nbd1", 00:15:48.814 "bdev_name": "nvme1n1" 00:15:48.814 }, 00:15:48.814 { 00:15:48.814 "nbd_device": "/dev/nbd10", 00:15:48.814 "bdev_name": "nvme1n2" 00:15:48.814 }, 00:15:48.814 { 00:15:48.814 "nbd_device": "/dev/nbd11", 00:15:48.814 "bdev_name": "nvme1n3" 00:15:48.814 }, 00:15:48.814 { 00:15:48.814 "nbd_device": "/dev/nbd12", 00:15:48.814 "bdev_name": "nvme2n1" 00:15:48.814 }, 00:15:48.814 { 00:15:48.814 "nbd_device": "/dev/nbd13", 00:15:48.814 "bdev_name": "nvme3n1" 00:15:48.814 } 00:15:48.814 ]' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:49.076 { 00:15:49.076 "nbd_device": "/dev/nbd0", 00:15:49.076 "bdev_name": "nvme0n1" 00:15:49.076 }, 00:15:49.076 { 00:15:49.076 "nbd_device": "/dev/nbd1", 00:15:49.076 "bdev_name": "nvme1n1" 00:15:49.076 }, 00:15:49.076 { 00:15:49.076 "nbd_device": "/dev/nbd10", 00:15:49.076 "bdev_name": "nvme1n2" 00:15:49.076 }, 00:15:49.076 { 00:15:49.076 "nbd_device": "/dev/nbd11", 00:15:49.076 "bdev_name": "nvme1n3" 00:15:49.076 }, 00:15:49.076 { 00:15:49.076 "nbd_device": "/dev/nbd12", 00:15:49.076 "bdev_name": "nvme2n1" 00:15:49.076 }, 00:15:49.076 { 00:15:49.076 "nbd_device": "/dev/nbd13", 00:15:49.076 "bdev_name": "nvme3n1" 00:15:49.076 } 00:15:49.076 ]' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:49.076 /dev/nbd1 00:15:49.076 /dev/nbd10 00:15:49.076 /dev/nbd11 00:15:49.076 /dev/nbd12 00:15:49.076 /dev/nbd13' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:49.076 /dev/nbd1 00:15:49.076 /dev/nbd10 00:15:49.076 /dev/nbd11 00:15:49.076 /dev/nbd12 00:15:49.076 /dev/nbd13' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:49.076 256+0 records in 00:15:49.076 256+0 records out 00:15:49.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715116 s, 147 MB/s 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.076 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:49.339 256+0 records in 00:15:49.339 256+0 records out 00:15:49.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238962 s, 4.4 MB/s 00:15:49.339 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.339 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:49.600 256+0 records in 00:15:49.600 256+0 records out 00:15:49.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242481 s, 
4.3 MB/s 00:15:49.600 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.600 10:21:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:49.600 256+0 records in 00:15:49.600 256+0 records out 00:15:49.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219557 s, 4.8 MB/s 00:15:49.600 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.600 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:49.861 256+0 records in 00:15:49.861 256+0 records out 00:15:49.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187076 s, 5.6 MB/s 00:15:49.861 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.861 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:50.122 256+0 records in 00:15:50.122 256+0 records out 00:15:50.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.278705 s, 3.8 MB/s 00:15:50.122 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.122 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:50.384 256+0 records in 00:15:50.384 256+0 records out 00:15:50.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204319 s, 5.1 MB/s 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:50.384 
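
The dd/cmp sequence running through this stretch is the heart of nbd_dd_data_verify: one 1 MiB random template is written to every nbd device with O_DIRECT, then each device is compared byte-for-byte against that same template. A condensed sketch of the pattern, paraphrased from the xtrace output (paths and sizes are the ones in the trace; this is not the script source):

    # nbd_dd_data_verify: write one shared random template through every nbd
    # device, then read each device back and diff it against the template.
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB template
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                          # any mismatch fails
    done
    rm "$tmp"

Using a single template for all six devices keeps the check cheap while still exercising each bdev's full write and read path through the nbd layer, which is why the per-device dd throughput figures above are the interesting part of this output.
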
10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.384 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:50.644 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:50.644 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.644 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.644 10:21:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.644 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.644 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.644 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.644 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.644 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.644 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.905 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:51.166 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.167 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.428 10:21:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.688 
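
The teardown here repeats one idiom per device: nbd_stop_disk over the RPC socket, then a waitfornbd_exit poll of /proc/partitions, up to 20 probes, until the kernel drops the entry. A minimal sketch of that loop as the xtrace shows it (the delay between probes is an assumption; the trace records only the counter and the grep):

    # waitfornbd_exit: block until the nbd device disappears from the kernel.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Detach is complete once the name leaves /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 1   # probe interval assumed, not visible in the trace
        done
        return 0
    }

In this sketch the helper returns 0 even if the name never disappears, so a stuck detach would surface later, at the nbd_get_count check that expects zero devices, rather than here.
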
10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.688 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:51.948 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:52.209 malloc_lvol_verify 00:15:52.209 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:52.469 6ca3319a-d698-4f8f-a4df-76b09a641631 00:15:52.469 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:52.469 e712e1d5-de89-45ac-9253-f78861192f35 00:15:52.730 10:21:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:52.730 /dev/nbd0 00:15:52.730 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:52.730 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:52.730 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:52.730 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:52.730 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:52.730 mke2fs 1.47.0 (5-Feb-2023) 00:15:52.730 Discarding device blocks: 0/4096 
done 00:15:52.730 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:52.730 00:15:52.730 Allocating group tables: 0/1 done 00:15:52.730 Writing inode tables: 0/1 done 00:15:52.731 Creating journal (1024 blocks): done 00:15:52.731 Writing superblocks and filesystem accounting information: 0/1 done 00:15:52.731 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.731 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83472 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83472 ']' 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83472 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83472 00:15:52.993 killing process with pid 83472 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83472' 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83472 00:15:52.993 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83472 00:15:53.255 10:21:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:53.255 00:15:53.255 real 0m10.325s 00:15:53.255 user 0m14.028s 00:15:53.255 sys 0m3.791s 00:15:53.255 ************************************ 00:15:53.255 END TEST bdev_nbd 00:15:53.255 ************************************ 00:15:53.255 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.255 10:21:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
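
The lvol round trip that just finished (nbd_with_lvol_verify) stacks four RPCs and then lets mkfs.ext4 act as the verifier: if a filesystem can be created, the lvol-backed nbd device handled real reads and writes end to end. A sketch of that sequence with the sizes from the trace (the UUIDs in the log are values returned by the RPCs, not inputs; the trace also waits for /sys/block/nbd0/size to go non-zero before running mkfs):

    # nbd_with_lvol_verify: malloc bdev -> lvstore -> lvol -> nbd -> mkfs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0        # the actual verification step
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
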
00:15:53.255 10:21:32 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:53.255 10:21:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:53.255 10:21:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:53.255 10:21:32 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:53.255 10:21:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.255 10:21:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.255 10:21:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.255 ************************************ 00:15:53.255 START TEST bdev_fio 00:15:53.255 ************************************ 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:53.255 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.255 10:21:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:53.516 ************************************ 00:15:53.516 START TEST bdev_fio_rw_verify 00:15:53.516 ************************************ 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:53.516 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:53.517 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:53.517 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:53.517 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:53.517 10:21:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.517 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.517 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.517 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.517 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.517 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.517 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.517 fio-3.35 00:15:53.517 Starting 6 threads 00:16:05.751 00:16:05.751 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83878: Fri Nov 29 10:21:43 2024 00:16:05.751 read: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(542MiB/10004msec) 00:16:05.751 slat (usec): min=2, max=4171, avg= 6.79, stdev=20.57 00:16:05.751 clat (usec): min=84, max=38067, avg=1412.68, stdev=808.76 00:16:05.751 lat (usec): min=87, max=38080, avg=1419.48, stdev=809.52 
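
Before this fio run launched, the wrapper probed the SPDK fio plugin for a sanitizer runtime and found /usr/lib64/libasan.so.8; that is why the trace sets LD_PRELOAD to the ASan library followed by the plugin itself. A sketch of the probe-and-launch logic (the sanitizer list in the trace also includes libclang_rt.asan; this shows only the branch that fired):

    # fio_bdev launch: an ASan-linked plugin must have the ASan runtime
    # preloaded first, or fio fails when it dlopens the ioengine.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        LD_PRELOAD="$asan_lib $plugin"
        export LD_PRELOAD
    fi
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

The per-job statistics that follow (slat/clat/lat, percentiles, bandwidth, IO depths) are standard fio group reporting for the six jobs defined in bdev.fio.
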
00:16:05.751 clat percentiles (usec): 00:16:05.751 | 50.000th=[ 1319], 99.000th=[ 3785], 99.900th=[ 4948], 99.990th=[ 9765], 00:16:05.751 | 99.999th=[38011] 00:16:05.751 write: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(551MiB/10004msec); 0 zone resets 00:16:05.751 slat (usec): min=13, max=6034, avg=42.55, stdev=146.23 00:16:05.751 clat (usec): min=82, max=9357, avg=1680.18, stdev=828.80 00:16:05.751 lat (usec): min=104, max=9387, avg=1722.73, stdev=841.93 00:16:05.751 clat percentiles (usec): 00:16:05.751 | 50.000th=[ 1549], 99.000th=[ 4228], 99.900th=[ 5669], 99.990th=[ 7242], 00:16:05.751 | 99.999th=[ 9372] 00:16:05.751 bw ( KiB/s): min=48957, max=73575, per=100.00%, avg=56707.37, stdev=1362.72, samples=114 00:16:05.751 iops : min=12237, max=18393, avg=14176.26, stdev=340.67, samples=114 00:16:05.751 lat (usec) : 100=0.01%, 250=1.29%, 500=5.19%, 750=7.85%, 1000=11.40% 00:16:05.751 lat (msec) : 2=50.37%, 4=22.85%, 10=1.04%, 50=0.01% 00:16:05.751 cpu : usr=44.50%, sys=30.39%, ctx=5081, majf=0, minf=16053 00:16:05.751 IO depths : 1=11.3%, 2=23.7%, 4=51.2%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.751 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.751 issued rwts: total=138825,141100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.751 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:05.751 00:16:05.751 Run status group 0 (all jobs): 00:16:05.751 READ: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=542MiB (569MB), run=10004-10004msec 00:16:05.751 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=551MiB (578MB), run=10004-10004msec 00:16:05.751 ----------------------------------------------------- 00:16:05.751 Suppressions used: 00:16:05.751 count bytes template 00:16:05.751 6 48 /usr/src/fio/parse.c 00:16:05.751 2196 210816 /usr/src/fio/iolog.c 00:16:05.751 1 8 libtcmalloc_minimal.so 00:16:05.751 1 904 libcrypto.so 00:16:05.751 ----------------------------------------------------- 00:16:05.751 00:16:05.751 00:16:05.751 real 0m11.145s 00:16:05.751 user 0m27.399s 00:16:05.751 sys 0m18.556s 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.751 ************************************ 00:16:05.751 END TEST bdev_fio_rw_verify 00:16:05.751 ************************************ 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:05.751 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "21b73923-7d93-4c10-9cd7-44f1743b7da1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21b73923-7d93-4c10-9cd7-44f1743b7da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c3e1d98e-7654-4136-92d5-a92330767984"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c3e1d98e-7654-4136-92d5-a92330767984",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "d51b6358-83fa-4ff3-9be1-70c3d34c9549"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d51b6358-83fa-4ff3-9be1-70c3d34c9549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "adb62124-9363-4959-b45b-439028cbd06f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "adb62124-9363-4959-b45b-439028cbd06f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f82aeb2f-cf12-4efb-a1df-6ef01c57b8b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f82aeb2f-cf12-4efb-a1df-6ef01c57b8b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e38d9bb5-4373-4c9d-bebf-56156f184b55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e38d9bb5-4373-4c9d-bebf-56156f184b55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.752 /home/vagrant/spdk_repo/spdk 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:05.752 00:16:05.752 real 0m11.310s 00:16:05.752 user 0m27.466s 00:16:05.752 
sys 0m18.631s 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.752 ************************************ 00:16:05.752 END TEST bdev_fio 00:16:05.752 ************************************ 00:16:05.752 10:21:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:05.752 10:21:44 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:05.752 10:21:44 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:05.752 10:21:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:05.752 10:21:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.752 10:21:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.752 ************************************ 00:16:05.752 START TEST bdev_verify 00:16:05.752 ************************************ 00:16:05.752 10:21:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:05.752 [2024-11-29 10:21:44.091615] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:05.752 [2024-11-29 10:21:44.091755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84044 ] 00:16:05.752 [2024-11-29 10:21:44.240337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:05.752 [2024-11-29 10:21:44.270348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.752 [2024-11-29 10:21:44.270429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.752 Running I/O for 5 seconds... 
00:16:07.714 23616.00 IOPS, 92.25 MiB/s [2024-11-29T10:21:48.123Z] 23778.00 IOPS, 92.88 MiB/s
[2024-11-29T10:21:49.066Z] 23958.67 IOPS, 93.59 MiB/s
[2024-11-29T10:21:50.010Z] 23201.00 IOPS, 90.63 MiB/s
[2024-11-29T10:21:50.010Z] 23251.20 IOPS, 90.83 MiB/s
00:16:10.545 Latency(us)
00:16:10.545 [2024-11-29T10:21:50.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:10.545 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x0 length 0x20000
00:16:10.545 nvme0n1 : 5.08 1865.03 7.29 0.00 0.00 68505.63 8368.44 68157.44
00:16:10.545 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x20000 length 0x20000
00:16:10.545 nvme0n1 : 5.04 1803.80 7.05 0.00 0.00 70838.52 7864.32 79046.50
00:16:10.545 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x0 length 0x80000
00:16:10.545 nvme1n1 : 5.04 1853.40 7.24 0.00 0.00 68802.40 9427.10 72190.42
00:16:10.545 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x80000 length 0x80000
00:16:10.545 nvme1n1 : 5.03 1806.85 7.06 0.00 0.00 70604.03 10284.11 72190.42
00:16:10.545 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x0 length 0x80000
00:16:10.545 nvme1n2 : 5.05 1851.59 7.23 0.00 0.00 68741.25 12300.60 69770.63
00:16:10.545 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x80000 length 0x80000
00:16:10.545 nvme1n2 : 5.05 1800.67 7.03 0.00 0.00 70708.91 10989.88 68157.44
00:16:10.545 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x0 length 0x80000
00:16:10.545 nvme1n3 : 5.08 1840.48 7.19 0.00 0.00 69046.79 8368.44 70173.93
00:16:10.545 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x80000 length 0x80000
00:16:10.545 nvme1n3 : 5.04 1803.02 7.04 0.00 0.00 70493.73 10586.58 70577.23
00:16:10.545 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x0 length 0xbd0bd
00:16:10.545 nvme2n1 : 5.09 2491.60 9.73 0.00 0.00 50893.40 4990.82 60898.07
00:16:10.545 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:10.545 nvme2n1 : 5.07 2533.31 9.90 0.00 0.00 50054.33 5444.53 60091.47
00:16:10.545 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0x0 length 0xa0000
00:16:10.545 nvme3n1 : 5.07 1741.24 6.80 0.00 0.00 72590.66 8822.15 94371.84
00:16:10.545 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:10.545 Verification LBA range: start 0xa0000 length 0xa0000
00:16:10.545 nvme3n1 : 5.07 1615.36 6.31 0.00 0.00 78445.16 7007.31 99614.72
00:16:10.545 [2024-11-29T10:21:50.010Z] ===================================================================================================================
00:16:10.545 [2024-11-29T10:21:50.010Z] Total : 23006.33 89.87 0.00 0.00 66322.76 4990.82 99614.72
00:16:10.545
00:16:10.545 real 0m5.867s
00:16:10.545 user 0m9.240s
00:16:10.545 sys 0m1.555s
00:16:10.545 10:21:49 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.545 10:21:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:10.545 ************************************ 00:16:10.545 END TEST bdev_verify 00:16:10.545 ************************************ 00:16:10.545 10:21:49 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:10.545 10:21:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:10.545 10:21:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.545 10:21:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.545 ************************************ 00:16:10.545 START TEST bdev_verify_big_io 00:16:10.545 ************************************ 00:16:10.545 10:21:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:10.806 [2024-11-29 10:21:50.037369] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:10.806 [2024-11-29 10:21:50.037510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84134 ] 00:16:10.806 [2024-11-29 10:21:50.185495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.806 [2024-11-29 10:21:50.215880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.806 [2024-11-29 10:21:50.215945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.067 Running I/O for 5 seconds... 
00:16:17.221 2096.00 IOPS, 131.00 MiB/s [2024-11-29T10:21:57.259Z] 2469.50 IOPS, 154.34 MiB/s
[2024-11-29T10:21:57.259Z] 3127.67 IOPS, 195.48 MiB/s
00:16:17.794 Latency(us)
00:16:17.794 [2024-11-29T10:21:57.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:17.794 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:17.794 Verification LBA range: start 0x0 length 0x2000
00:16:17.794 nvme0n1 : 5.77 77.62 4.85 0.00 0.00 1585268.13 99614.72 1729343.80
00:16:17.794 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:17.794 Verification LBA range: start 0x2000 length 0x2000
00:16:17.794 nvme0n1 : 5.67 120.87 7.55 0.00 0.00 1020091.65 89935.56 1426063.36
00:16:17.794 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:17.794 Verification LBA range: start 0x0 length 0x8000
00:16:17.794 nvme1n1 : 5.78 88.59 5.54 0.00 0.00 1318849.97 25811.10 1393799.48
00:16:17.794 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:17.794 Verification LBA range: start 0x8000 length 0x8000
00:16:17.794 nvme1n1 : 5.67 108.97 6.81 0.00 0.00 1099703.54 63317.86 2284282.49
00:16:17.794 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:17.794 Verification LBA range: start 0x0 length 0x8000
00:16:17.794 nvme1n2 : 5.89 88.26 5.52 0.00 0.00 1251142.61 6024.27 1187310.67
00:16:17.795 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0x8000 length 0x8000
00:16:17.795 nvme1n2 : 5.84 132.98 8.31 0.00 0.00 880378.19 87515.77 1348630.06
00:16:17.795 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0x0 length 0x8000
00:16:17.795 nvme1n3 : 6.10 99.68 6.23 0.00 0.00 1060322.57 36095.21 1529307.77
00:16:17.795 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0x8000 length 0x8000
00:16:17.795 nvme1n3 : 5.68 118.70 7.42 0.00 0.00 963313.85 6326.74 2374621.34
00:16:17.795 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0x0 length 0xbd0b
00:16:17.795 nvme2n1 : 6.19 155.04 9.69 0.00 0.00 649234.96 7561.85 1148594.02
00:16:17.795 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0xbd0b length 0xbd0b
00:16:17.795 nvme2n1 : 5.77 167.38 10.46 0.00 0.00 669028.68 9527.93 1032444.06
00:16:17.795 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0x0 length 0xa000
00:16:17.795 nvme3n1 : 6.44 252.23 15.76 0.00 0.00 382430.47 869.61 2787598.97
00:16:17.795 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:17.795 Verification LBA range: start 0xa000 length 0xa000
00:16:17.795 nvme3n1 : 5.84 175.27 10.95 0.00 0.00 619312.61 1109.07 1103424.59
00:16:17.795 [2024-11-29T10:21:57.260Z] ===================================================================================================================
00:16:17.795 [2024-11-29T10:21:57.260Z] Total : 1585.59 99.10 0.00 0.00 840821.95 869.61 2787598.97
00:16:17.795
00:16:17.795 real 0m7.247s
00:16:17.795 user 0m13.337s
00:16:17.795 sys 0m0.445s
00:16:17.795 10:21:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:17.795
************************************ 00:16:17.795 10:21:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.795 END TEST bdev_verify_big_io 00:16:17.795 ************************************ 00:16:18.057 10:21:57 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:18.057 10:21:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:18.057 10:21:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.057 10:21:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 ************************************ 00:16:18.057 START TEST bdev_write_zeroes 00:16:18.057 ************************************ 00:16:18.057 10:21:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:18.057 [2024-11-29 10:21:57.352700] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:18.057 [2024-11-29 10:21:57.352861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84238 ] 00:16:18.057 [2024-11-29 10:21:57.500602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.319 [2024-11-29 10:21:57.529161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.582 Running I/O for 1 seconds... 
00:16:19.525 74016.00 IOPS, 289.12 MiB/s
00:16:19.525 Latency(us)
00:16:19.525 [2024-11-29T10:21:58.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:19.526 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:19.526 nvme0n1 : 1.02 12276.71 47.96 0.00 0.00 10415.58 5721.80 21878.94
00:16:19.526 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:19.526 nvme1n1 : 1.02 12210.77 47.70 0.00 0.00 10461.51 6654.42 24702.03
00:16:19.526 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:19.526 nvme1n2 : 1.02 11692.74 45.67 0.00 0.00 10916.16 7511.43 30247.38
00:16:19.526 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:19.526 nvme1n3 : 1.02 12181.90 47.59 0.00 0.00 10463.79 6856.07 24903.68
00:16:19.526 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:19.526 nvme2n1 : 1.03 13000.41 50.78 0.00 0.00 9795.89 4940.41 18148.43
00:16:19.526 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:19.526 nvme3n1 : 1.02 12165.99 47.52 0.00 0.00 10400.96 4108.60 21979.77
00:16:19.526 [2024-11-29T10:21:58.991Z] ===================================================================================================================
00:16:19.526 [2024-11-29T10:21:58.991Z] Total : 73528.52 287.22 0.00 0.00 10397.95 4108.60 30247.38
00:16:19.787
00:16:19.787 real 0m1.797s
00:16:19.787 user 0m1.107s
00:16:19.787 sys 0m0.495s
00:16:19.787 10:21:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:19.787 ************************************
00:16:19.787 END TEST bdev_write_zeroes
00:16:19.787 ************************************
00:16:19.787 10:21:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:16:19.787 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:19.788 10:21:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:19.788 10:21:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:19.788 10:21:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:19.788 ************************************
00:16:19.788 START TEST bdev_json_nonenclosed
00:16:19.788 ************************************
00:16:19.788 10:21:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:19.788 [2024-11-29 10:21:59.223944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:16:19.788 [2024-11-29 10:21:59.224081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84275 ] 00:16:20.080 [2024-11-29 10:21:59.369081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.080 [2024-11-29 10:21:59.400346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.080 [2024-11-29 10:21:59.400460] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:20.080 [2024-11-29 10:21:59.400478] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:20.080 [2024-11-29 10:21:59.400493] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:20.080 00:16:20.080 real 0m0.322s 00:16:20.080 user 0m0.132s 00:16:20.080 sys 0m0.086s 00:16:20.080 10:21:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.080 ************************************ 00:16:20.080 END TEST bdev_json_nonenclosed 00:16:20.080 ************************************ 00:16:20.080 10:21:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.431 10:21:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:20.431 10:21:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.431 10:21:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 ************************************ 00:16:20.431 START TEST bdev_json_nonarray 00:16:20.431 ************************************ 00:16:20.431 10:21:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.431 [2024-11-29 10:21:59.618473] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:20.431 [2024-11-29 10:21:59.618620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84300 ] 00:16:20.431 [2024-11-29 10:21:59.767625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.431 [2024-11-29 10:21:59.797042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.431 [2024-11-29 10:21:59.797157] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:20.431 [2024-11-29 10:21:59.797178] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:20.431 [2024-11-29 10:21:59.797193] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:20.431 00:16:20.431 real 0m0.325s 00:16:20.431 user 0m0.123s 00:16:20.431 sys 0m0.097s 00:16:20.431 10:21:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.431 ************************************ 00:16:20.431 END TEST bdev_json_nonarray 00:16:20.431 ************************************ 00:16:20.431 10:21:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:20.693 10:21:59 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:21.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:26.564 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:26.564 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.508 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.508 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.508 00:16:27.508 real 0m49.704s 00:16:27.508 user 1m13.487s 00:16:27.508 sys 0m40.560s 00:16:27.508 10:22:06 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.508 ************************************ 00:16:27.508 END TEST blockdev_xnvme 00:16:27.508 ************************************ 00:16:27.508 10:22:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 10:22:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:27.508 10:22:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.508 10:22:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.508 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 ************************************ 00:16:27.508 START TEST ublk 00:16:27.508 ************************************ 00:16:27.508 10:22:06 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:27.508 * Looking for test storage... 
00:16:27.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:27.770 10:22:06 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.770 10:22:06 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.770 10:22:06 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.770 10:22:07 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.770 10:22:07 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.770 10:22:07 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.770 10:22:07 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.770 10:22:07 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.770 10:22:07 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:27.770 10:22:07 ublk -- scripts/common.sh@345 -- # : 1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.770 10:22:07 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:27.770 10:22:07 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@353 -- # local d=1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.770 10:22:07 ublk -- scripts/common.sh@355 -- # echo 1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.770 10:22:07 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@353 -- # local d=2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.770 10:22:07 ublk -- scripts/common.sh@355 -- # echo 2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.770 10:22:07 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.770 10:22:07 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.770 10:22:07 ublk -- scripts/common.sh@368 -- # return 0 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.770 --rc genhtml_branch_coverage=1 00:16:27.770 --rc genhtml_function_coverage=1 00:16:27.770 --rc genhtml_legend=1 00:16:27.770 --rc geninfo_all_blocks=1 00:16:27.770 --rc geninfo_unexecuted_blocks=1 00:16:27.770 00:16:27.770 ' 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.770 --rc genhtml_branch_coverage=1 00:16:27.770 --rc genhtml_function_coverage=1 00:16:27.770 --rc genhtml_legend=1 00:16:27.770 --rc geninfo_all_blocks=1 00:16:27.770 --rc geninfo_unexecuted_blocks=1 00:16:27.770 00:16:27.770 ' 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.770 --rc genhtml_branch_coverage=1 00:16:27.770 --rc 
genhtml_function_coverage=1 00:16:27.770 --rc genhtml_legend=1 00:16:27.770 --rc geninfo_all_blocks=1 00:16:27.770 --rc geninfo_unexecuted_blocks=1 00:16:27.770 00:16:27.770 ' 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.770 --rc genhtml_branch_coverage=1 00:16:27.770 --rc genhtml_function_coverage=1 00:16:27.770 --rc genhtml_legend=1 00:16:27.770 --rc geninfo_all_blocks=1 00:16:27.770 --rc geninfo_unexecuted_blocks=1 00:16:27.770 00:16:27.770 ' 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:27.770 10:22:07 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:27.770 10:22:07 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:27.770 10:22:07 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:27.770 10:22:07 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:27.770 10:22:07 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:27.770 10:22:07 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:27.770 10:22:07 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:27.770 10:22:07 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:27.770 10:22:07 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.770 10:22:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.770 ************************************ 00:16:27.770 START TEST test_save_ublk_config 00:16:27.770 ************************************ 00:16:27.770 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:27.770 10:22:07 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84602 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84602 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84602 ']' 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:27.771 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.771 10:22:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.771 [2024-11-29 10:22:07.176786] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:27.771 [2024-11-29 10:22:07.176959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84602 ] 00:16:28.033 [2024-11-29 10:22:07.324492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.033 [2024-11-29 10:22:07.353476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.606 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.606 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:28.606 10:22:08 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:28.606 10:22:08 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:28.606 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.606 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:28.606 [2024-11-29 10:22:08.039842] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:28.606 [2024-11-29 10:22:08.040841] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:28.606 malloc0 00:16:28.867 [2024-11-29 10:22:08.073939] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:28.867 [2024-11-29 10:22:08.074026] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:28.867 [2024-11-29 10:22:08.074035] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:28.867 [2024-11-29 10:22:08.074050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.867 [2024-11-29 10:22:08.081995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.867 [2024-11-29 10:22:08.082036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.867 [2024-11-29 10:22:08.089836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.867 [2024-11-29 10:22:08.089958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:28.867 [2024-11-29 10:22:08.113843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.867 0 00:16:28.867 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.867 10:22:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:28.867 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.867 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.128 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.128 10:22:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:29.128 
"subsystems": [ 00:16:29.128 { 00:16:29.128 "subsystem": "fsdev", 00:16:29.128 "config": [ 00:16:29.128 { 00:16:29.128 "method": "fsdev_set_opts", 00:16:29.128 "params": { 00:16:29.128 "fsdev_io_pool_size": 65535, 00:16:29.128 "fsdev_io_cache_size": 256 00:16:29.128 } 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "subsystem": "keyring", 00:16:29.128 "config": [] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "subsystem": "iobuf", 00:16:29.128 "config": [ 00:16:29.128 { 00:16:29.128 "method": "iobuf_set_options", 00:16:29.128 "params": { 00:16:29.128 "small_pool_count": 8192, 00:16:29.128 "large_pool_count": 1024, 00:16:29.128 "small_bufsize": 8192, 00:16:29.128 "large_bufsize": 135168, 00:16:29.128 "enable_numa": false 00:16:29.128 } 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "subsystem": "sock", 00:16:29.128 "config": [ 00:16:29.128 { 00:16:29.128 "method": "sock_set_default_impl", 00:16:29.128 "params": { 00:16:29.128 "impl_name": "posix" 00:16:29.128 } 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "method": "sock_impl_set_options", 00:16:29.128 "params": { 00:16:29.128 "impl_name": "ssl", 00:16:29.128 "recv_buf_size": 4096, 00:16:29.128 "send_buf_size": 4096, 00:16:29.128 "enable_recv_pipe": true, 00:16:29.128 "enable_quickack": false, 00:16:29.128 "enable_placement_id": 0, 00:16:29.128 "enable_zerocopy_send_server": true, 00:16:29.128 "enable_zerocopy_send_client": false, 00:16:29.128 "zerocopy_threshold": 0, 00:16:29.128 "tls_version": 0, 00:16:29.128 "enable_ktls": false 00:16:29.128 } 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "method": "sock_impl_set_options", 00:16:29.128 "params": { 00:16:29.128 "impl_name": "posix", 00:16:29.128 "recv_buf_size": 2097152, 00:16:29.128 "send_buf_size": 2097152, 00:16:29.128 "enable_recv_pipe": true, 00:16:29.128 "enable_quickack": false, 00:16:29.128 "enable_placement_id": 0, 00:16:29.128 "enable_zerocopy_send_server": true, 00:16:29.128 "enable_zerocopy_send_client": false, 00:16:29.128 "zerocopy_threshold": 0, 00:16:29.128 "tls_version": 0, 00:16:29.128 "enable_ktls": false 00:16:29.128 } 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "subsystem": "vmd", 00:16:29.128 "config": [] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "subsystem": "accel", 00:16:29.128 "config": [ 00:16:29.128 { 00:16:29.128 "method": "accel_set_options", 00:16:29.128 "params": { 00:16:29.128 "small_cache_size": 128, 00:16:29.128 "large_cache_size": 16, 00:16:29.128 "task_count": 2048, 00:16:29.128 "sequence_count": 2048, 00:16:29.128 "buf_count": 2048 00:16:29.128 } 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "subsystem": "bdev", 00:16:29.128 "config": [ 00:16:29.128 { 00:16:29.128 "method": "bdev_set_options", 00:16:29.128 "params": { 00:16:29.128 "bdev_io_pool_size": 65535, 00:16:29.128 "bdev_io_cache_size": 256, 00:16:29.128 "bdev_auto_examine": true, 00:16:29.128 "iobuf_small_cache_size": 128, 00:16:29.128 "iobuf_large_cache_size": 16 00:16:29.128 } 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "method": "bdev_raid_set_options", 00:16:29.128 "params": { 00:16:29.128 "process_window_size_kb": 1024, 00:16:29.128 "process_max_bandwidth_mb_sec": 0 00:16:29.128 } 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "method": "bdev_iscsi_set_options", 00:16:29.128 "params": { 00:16:29.128 "timeout_sec": 30 00:16:29.128 } 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "method": "bdev_nvme_set_options", 00:16:29.128 "params": { 00:16:29.128 "action_on_timeout": "none", 
00:16:29.128 "timeout_us": 0, 00:16:29.128 "timeout_admin_us": 0, 00:16:29.128 "keep_alive_timeout_ms": 10000, 00:16:29.128 "arbitration_burst": 0, 00:16:29.128 "low_priority_weight": 0, 00:16:29.128 "medium_priority_weight": 0, 00:16:29.128 "high_priority_weight": 0, 00:16:29.128 "nvme_adminq_poll_period_us": 10000, 00:16:29.128 "nvme_ioq_poll_period_us": 0, 00:16:29.128 "io_queue_requests": 0, 00:16:29.128 "delay_cmd_submit": true, 00:16:29.129 "transport_retry_count": 4, 00:16:29.129 "bdev_retry_count": 3, 00:16:29.129 "transport_ack_timeout": 0, 00:16:29.129 "ctrlr_loss_timeout_sec": 0, 00:16:29.129 "reconnect_delay_sec": 0, 00:16:29.129 "fast_io_fail_timeout_sec": 0, 00:16:29.129 "disable_auto_failback": false, 00:16:29.129 "generate_uuids": false, 00:16:29.129 "transport_tos": 0, 00:16:29.129 "nvme_error_stat": false, 00:16:29.129 "rdma_srq_size": 0, 00:16:29.129 "io_path_stat": false, 00:16:29.129 "allow_accel_sequence": false, 00:16:29.129 "rdma_max_cq_size": 0, 00:16:29.129 "rdma_cm_event_timeout_ms": 0, 00:16:29.129 "dhchap_digests": [ 00:16:29.129 "sha256", 00:16:29.129 "sha384", 00:16:29.129 "sha512" 00:16:29.129 ], 00:16:29.129 "dhchap_dhgroups": [ 00:16:29.129 "null", 00:16:29.129 "ffdhe2048", 00:16:29.129 "ffdhe3072", 00:16:29.129 "ffdhe4096", 00:16:29.129 "ffdhe6144", 00:16:29.129 "ffdhe8192" 00:16:29.129 ] 00:16:29.129 } 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "method": "bdev_nvme_set_hotplug", 00:16:29.129 "params": { 00:16:29.129 "period_us": 100000, 00:16:29.129 "enable": false 00:16:29.129 } 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "method": "bdev_malloc_create", 00:16:29.129 "params": { 00:16:29.129 "name": "malloc0", 00:16:29.129 "num_blocks": 8192, 00:16:29.129 "block_size": 4096, 00:16:29.129 "physical_block_size": 4096, 00:16:29.129 "uuid": "4e3677f9-b66f-41a2-af95-e1281ef66b7a", 00:16:29.129 "optimal_io_boundary": 0, 00:16:29.129 "md_size": 0, 00:16:29.129 "dif_type": 0, 00:16:29.129 "dif_is_head_of_md": false, 00:16:29.129 "dif_pi_format": 0 00:16:29.129 } 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "method": "bdev_wait_for_examine" 00:16:29.129 } 00:16:29.129 ] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "scsi", 00:16:29.129 "config": null 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "scheduler", 00:16:29.129 "config": [ 00:16:29.129 { 00:16:29.129 "method": "framework_set_scheduler", 00:16:29.129 "params": { 00:16:29.129 "name": "static" 00:16:29.129 } 00:16:29.129 } 00:16:29.129 ] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "vhost_scsi", 00:16:29.129 "config": [] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "vhost_blk", 00:16:29.129 "config": [] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "ublk", 00:16:29.129 "config": [ 00:16:29.129 { 00:16:29.129 "method": "ublk_create_target", 00:16:29.129 "params": { 00:16:29.129 "cpumask": "1" 00:16:29.129 } 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "method": "ublk_start_disk", 00:16:29.129 "params": { 00:16:29.129 "bdev_name": "malloc0", 00:16:29.129 "ublk_id": 0, 00:16:29.129 "num_queues": 1, 00:16:29.129 "queue_depth": 128 00:16:29.129 } 00:16:29.129 } 00:16:29.129 ] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "nbd", 00:16:29.129 "config": [] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "nvmf", 00:16:29.129 "config": [ 00:16:29.129 { 00:16:29.129 "method": "nvmf_set_config", 00:16:29.129 "params": { 00:16:29.129 "discovery_filter": "match_any", 00:16:29.129 "admin_cmd_passthru": { 00:16:29.129 "identify_ctrlr": false 
00:16:29.129 }, 00:16:29.129 "dhchap_digests": [ 00:16:29.129 "sha256", 00:16:29.129 "sha384", 00:16:29.129 "sha512" 00:16:29.129 ], 00:16:29.129 "dhchap_dhgroups": [ 00:16:29.129 "null", 00:16:29.129 "ffdhe2048", 00:16:29.129 "ffdhe3072", 00:16:29.129 "ffdhe4096", 00:16:29.129 "ffdhe6144", 00:16:29.129 "ffdhe8192" 00:16:29.129 ] 00:16:29.129 } 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "method": "nvmf_set_max_subsystems", 00:16:29.129 "params": { 00:16:29.129 "max_subsystems": 1024 00:16:29.129 } 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "method": "nvmf_set_crdt", 00:16:29.129 "params": { 00:16:29.129 "crdt1": 0, 00:16:29.129 "crdt2": 0, 00:16:29.129 "crdt3": 0 00:16:29.129 } 00:16:29.129 } 00:16:29.129 ] 00:16:29.129 }, 00:16:29.129 { 00:16:29.129 "subsystem": "iscsi", 00:16:29.129 "config": [ 00:16:29.129 { 00:16:29.129 "method": "iscsi_set_options", 00:16:29.129 "params": { 00:16:29.129 "node_base": "iqn.2016-06.io.spdk", 00:16:29.129 "max_sessions": 128, 00:16:29.129 "max_connections_per_session": 2, 00:16:29.129 "max_queue_depth": 64, 00:16:29.129 "default_time2wait": 2, 00:16:29.129 "default_time2retain": 20, 00:16:29.129 "first_burst_length": 8192, 00:16:29.129 "immediate_data": true, 00:16:29.129 "allow_duplicated_isid": false, 00:16:29.129 "error_recovery_level": 0, 00:16:29.129 "nop_timeout": 60, 00:16:29.129 "nop_in_interval": 30, 00:16:29.129 "disable_chap": false, 00:16:29.129 "require_chap": false, 00:16:29.129 "mutual_chap": false, 00:16:29.129 "chap_group": 0, 00:16:29.129 "max_large_datain_per_connection": 64, 00:16:29.129 "max_r2t_per_connection": 4, 00:16:29.129 "pdu_pool_size": 36864, 00:16:29.129 "immediate_data_pool_size": 16384, 00:16:29.129 "data_out_pool_size": 2048 00:16:29.129 } 00:16:29.129 } 00:16:29.129 ] 00:16:29.129 } 00:16:29.129 ] 00:16:29.129 }' 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84602 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84602 ']' 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84602 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84602 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.129 killing process with pid 84602 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84602' 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84602 00:16:29.129 10:22:08 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84602 00:16:29.391 [2024-11-29 10:22:08.725720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:29.391 [2024-11-29 10:22:08.769855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:29.391 [2024-11-29 10:22:08.769999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:29.391 [2024-11-29 10:22:08.777837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:29.391 [2024-11-29 
10:22:08.777907] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:29.391 [2024-11-29 10:22:08.777916] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:29.391 [2024-11-29 10:22:08.777946] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:29.391 [2024-11-29 10:22:08.778103] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:29.965 10:22:09 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84640 00:16:29.965 10:22:09 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 84640 00:16:29.965 10:22:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84640 ']' 00:16:29.965 10:22:09 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.966 10:22:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.966 10:22:09 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.966 10:22:09 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.966 10:22:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:29.966 10:22:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.966 10:22:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:29.966 "subsystems": [ 00:16:29.966 { 00:16:29.966 "subsystem": "fsdev", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "fsdev_set_opts", 00:16:29.966 "params": { 00:16:29.966 "fsdev_io_pool_size": 65535, 00:16:29.966 "fsdev_io_cache_size": 256 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "keyring", 00:16:29.966 "config": [] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "iobuf", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "iobuf_set_options", 00:16:29.966 "params": { 00:16:29.966 "small_pool_count": 8192, 00:16:29.966 "large_pool_count": 1024, 00:16:29.966 "small_bufsize": 8192, 00:16:29.966 "large_bufsize": 135168, 00:16:29.966 "enable_numa": false 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "sock", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "sock_set_default_impl", 00:16:29.966 "params": { 00:16:29.966 "impl_name": "posix" 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "sock_impl_set_options", 00:16:29.966 "params": { 00:16:29.966 "impl_name": "ssl", 00:16:29.966 "recv_buf_size": 4096, 00:16:29.966 "send_buf_size": 4096, 00:16:29.966 "enable_recv_pipe": true, 00:16:29.966 "enable_quickack": false, 00:16:29.966 "enable_placement_id": 0, 00:16:29.966 "enable_zerocopy_send_server": true, 00:16:29.966 "enable_zerocopy_send_client": false, 00:16:29.966 "zerocopy_threshold": 0, 00:16:29.966 "tls_version": 0, 00:16:29.966 "enable_ktls": false 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "sock_impl_set_options", 00:16:29.966 "params": { 00:16:29.966 "impl_name": "posix", 00:16:29.966 "recv_buf_size": 2097152, 00:16:29.966 "send_buf_size": 2097152, 00:16:29.966 "enable_recv_pipe": true, 00:16:29.966 "enable_quickack": false, 00:16:29.966 "enable_placement_id": 0, 00:16:29.966 "enable_zerocopy_send_server": true, 
00:16:29.966 "enable_zerocopy_send_client": false, 00:16:29.966 "zerocopy_threshold": 0, 00:16:29.966 "tls_version": 0, 00:16:29.966 "enable_ktls": false 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "vmd", 00:16:29.966 "config": [] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "accel", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "accel_set_options", 00:16:29.966 "params": { 00:16:29.966 "small_cache_size": 128, 00:16:29.966 "large_cache_size": 16, 00:16:29.966 "task_count": 2048, 00:16:29.966 "sequence_count": 2048, 00:16:29.966 "buf_count": 2048 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "bdev", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "bdev_set_options", 00:16:29.966 "params": { 00:16:29.966 "bdev_io_pool_size": 65535, 00:16:29.966 "bdev_io_cache_size": 256, 00:16:29.966 "bdev_auto_examine": true, 00:16:29.966 "iobuf_small_cache_size": 128, 00:16:29.966 "iobuf_large_cache_size": 16 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "bdev_raid_set_options", 00:16:29.966 "params": { 00:16:29.966 "process_window_size_kb": 1024, 00:16:29.966 "process_max_bandwidth_mb_sec": 0 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "bdev_iscsi_set_options", 00:16:29.966 "params": { 00:16:29.966 "timeout_sec": 30 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "bdev_nvme_set_options", 00:16:29.966 "params": { 00:16:29.966 "action_on_timeout": "none", 00:16:29.966 "timeout_us": 0, 00:16:29.966 "timeout_admin_us": 0, 00:16:29.966 "keep_alive_timeout_ms": 10000, 00:16:29.966 "arbitration_burst": 0, 00:16:29.966 "low_priority_weight": 0, 00:16:29.966 "medium_priority_weight": 0, 00:16:29.966 "high_priority_weight": 0, 00:16:29.966 "nvme_adminq_poll_period_us": 10000, 00:16:29.966 "nvme_ioq_poll_period_us": 0, 00:16:29.966 "io_queue_requests": 0, 00:16:29.966 "delay_cmd_submit": true, 00:16:29.966 "transport_retry_count": 4, 00:16:29.966 "bdev_retry_count": 3, 00:16:29.966 "transport_ack_timeout": 0, 00:16:29.966 "ctrlr_loss_timeout_sec": 0, 00:16:29.966 "reconnect_delay_sec": 0, 00:16:29.966 "fast_io_fail_timeout_sec": 0, 00:16:29.966 "disable_auto_failback": false, 00:16:29.966 "generate_uuids": false, 00:16:29.966 "transport_tos": 0, 00:16:29.966 "nvme_error_stat": false, 00:16:29.966 "rdma_srq_size": 0, 00:16:29.966 "io_path_stat": false, 00:16:29.966 "allow_accel_sequence": false, 00:16:29.966 "rdma_max_cq_size": 0, 00:16:29.966 "rdma_cm_event_timeout_ms": 0, 00:16:29.966 "dhchap_digests": [ 00:16:29.966 "sha256", 00:16:29.966 "sha384", 00:16:29.966 "sha512" 00:16:29.966 ], 00:16:29.966 "dhchap_dhgroups": [ 00:16:29.966 "null", 00:16:29.966 "ffdhe2048", 00:16:29.966 "ffdhe3072", 00:16:29.966 "ffdhe4096", 00:16:29.966 "ffdhe6144", 00:16:29.966 "ffdhe8192" 00:16:29.966 ] 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "bdev_nvme_set_hotplug", 00:16:29.966 "params": { 00:16:29.966 "period_us": 100000, 00:16:29.966 "enable": false 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "bdev_malloc_create", 00:16:29.966 "params": { 00:16:29.966 "name": "malloc0", 00:16:29.966 "num_blocks": 8192, 00:16:29.966 "block_size": 4096, 00:16:29.966 "physical_block_size": 4096, 00:16:29.966 "uuid": "4e3677f9-b66f-41a2-af95-e1281ef66b7a", 00:16:29.966 "optimal_io_boundary": 0, 00:16:29.966 "md_size": 0, 00:16:29.966 "dif_type": 0, 00:16:29.966 
"dif_is_head_of_md": false, 00:16:29.966 "dif_pi_format": 0 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "bdev_wait_for_examine" 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "scsi", 00:16:29.966 "config": null 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "scheduler", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "framework_set_scheduler", 00:16:29.966 "params": { 00:16:29.966 "name": "static" 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "vhost_scsi", 00:16:29.966 "config": [] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "vhost_blk", 00:16:29.966 "config": [] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "ublk", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "ublk_create_target", 00:16:29.966 "params": { 00:16:29.966 "cpumask": "1" 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "ublk_start_disk", 00:16:29.966 "params": { 00:16:29.966 "bdev_name": "malloc0", 00:16:29.966 "ublk_id": 0, 00:16:29.966 "num_queues": 1, 00:16:29.966 "queue_depth": 128 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "nbd", 00:16:29.966 "config": [] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "nvmf", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "nvmf_set_config", 00:16:29.966 "params": { 00:16:29.966 "discovery_filter": "match_any", 00:16:29.966 "admin_cmd_passthru": { 00:16:29.966 "identify_ctrlr": false 00:16:29.966 }, 00:16:29.966 "dhchap_digests": [ 00:16:29.966 "sha256", 00:16:29.966 "sha384", 00:16:29.966 "sha512" 00:16:29.966 ], 00:16:29.966 "dhchap_dhgroups": [ 00:16:29.966 "null", 00:16:29.966 "ffdhe2048", 00:16:29.966 "ffdhe3072", 00:16:29.966 "ffdhe4096", 00:16:29.966 "ffdhe6144", 00:16:29.966 "ffdhe8192" 00:16:29.966 ] 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "nvmf_set_max_subsystems", 00:16:29.966 "params": { 00:16:29.966 "max_subsystems": 1024 00:16:29.966 } 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "method": "nvmf_set_crdt", 00:16:29.966 "params": { 00:16:29.966 "crdt1": 0, 00:16:29.966 "crdt2": 0, 00:16:29.966 "crdt3": 0 00:16:29.966 } 00:16:29.966 } 00:16:29.966 ] 00:16:29.966 }, 00:16:29.966 { 00:16:29.966 "subsystem": "iscsi", 00:16:29.966 "config": [ 00:16:29.966 { 00:16:29.966 "method": "iscsi_set_options", 00:16:29.966 "params": { 00:16:29.966 "node_base": "iqn.2016-06.io.spdk", 00:16:29.966 "max_sessions": 128, 00:16:29.966 "max_connections_per_session": 2, 00:16:29.967 "max_queue_depth": 64, 00:16:29.967 "default_time2wait": 2, 00:16:29.967 "default_time2retain": 20, 00:16:29.967 "first_burst_length": 8192, 00:16:29.967 "immediate_data": true, 00:16:29.967 "allow_duplicated_isid": false, 00:16:29.967 "error_recovery_level": 0, 00:16:29.967 "nop_timeout": 60, 00:16:29.967 "nop_in_interval": 30, 00:16:29.967 "disable_chap": false, 00:16:29.967 "require_chap": false, 00:16:29.967 "mutual_chap": false, 00:16:29.967 "chap_group": 0, 00:16:29.967 "max_large_datain_per_connection": 64, 00:16:29.967 "max_r2t_per_connection": 4, 00:16:29.967 "pdu_pool_size": 36864, 00:16:29.967 "immediate_data_pool_size": 16384, 00:16:29.967 "data_out_pool_size": 2048 00:16:29.967 } 00:16:29.967 } 00:16:29.967 ] 00:16:29.967 } 00:16:29.967 ] 00:16:29.967 }' 00:16:29.967 [2024-11-29 10:22:09.315978] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:29.967 [2024-11-29 10:22:09.316127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84640 ] 00:16:30.228 [2024-11-29 10:22:09.463899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.228 [2024-11-29 10:22:09.493235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.490 [2024-11-29 10:22:09.884825] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:30.490 [2024-11-29 10:22:09.885200] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:30.490 [2024-11-29 10:22:09.892991] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:30.490 [2024-11-29 10:22:09.893070] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:30.490 [2024-11-29 10:22:09.893082] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:30.490 [2024-11-29 10:22:09.893092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.490 [2024-11-29 10:22:09.901923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.490 [2024-11-29 10:22:09.901955] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.490 [2024-11-29 10:22:09.908844] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.490 [2024-11-29 10:22:09.908962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:30.490 [2024-11-29 10:22:09.925830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84640 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84640 ']' 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84640 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:30.752 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.013 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84640 00:16:31.013 killing process with pid 84640 00:16:31.013 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.013 
10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.013 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84640' 00:16:31.013 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84640 00:16:31.013 10:22:10 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84640 00:16:31.274 [2024-11-29 10:22:10.525134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:31.274 [2024-11-29 10:22:10.563950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:31.274 [2024-11-29 10:22:10.564100] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:31.274 [2024-11-29 10:22:10.571837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:31.274 [2024-11-29 10:22:10.571901] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:31.274 [2024-11-29 10:22:10.571918] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:31.274 [2024-11-29 10:22:10.571952] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.274 [2024-11-29 10:22:10.572110] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:31.847 10:22:11 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:31.847 00:16:31.847 real 0m3.935s 00:16:31.847 user 0m2.692s 00:16:31.847 sys 0m1.929s 00:16:31.847 10:22:11 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.847 ************************************ 00:16:31.847 END TEST test_save_ublk_config 00:16:31.847 ************************************ 00:16:31.847 10:22:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:31.847 10:22:11 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84691 00:16:31.847 10:22:11 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.847 10:22:11 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84691 00:16:31.847 10:22:11 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:31.847 10:22:11 ublk -- common/autotest_common.sh@835 -- # '[' -z 84691 ']' 00:16:31.847 10:22:11 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.847 10:22:11 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.847 10:22:11 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.847 10:22:11 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.847 10:22:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.848 [2024-11-29 10:22:11.166127] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
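The shutdown traced above for the first target (pid 84640) tears the ublk device down in a fixed order: UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV, then target shutdown. The same teardown can be driven explicitly over RPC; a rough sketch, assuming a running target and the repo's scripts/rpc.py (both calls appear later in this log):

    ./scripts/rpc.py ublk_stop_disk 0       # drives STOP_DEV, then DEL_DEV in the ublk layer
    ./scripts/rpc.py ublk_destroy_target    # finishes shutdown of the ublk target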
00:16:31.848 [2024-11-29 10:22:11.166280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84691 ] 00:16:32.109 [2024-11-29 10:22:11.316896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.109 [2024-11-29 10:22:11.347761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.109 [2024-11-29 10:22:11.347895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.683 10:22:12 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.683 10:22:12 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:32.683 10:22:12 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:32.683 10:22:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:32.683 10:22:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.683 10:22:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.683 ************************************ 00:16:32.683 START TEST test_create_ublk 00:16:32.683 ************************************ 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:32.683 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.683 [2024-11-29 10:22:12.043827] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:32.683 [2024-11-29 10:22:12.045599] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.683 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:32.683 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.683 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:32.683 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.683 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.683 [2024-11-29 10:22:12.136981] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:32.683 [2024-11-29 10:22:12.137456] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:32.683 [2024-11-29 10:22:12.137474] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:32.683 [2024-11-29 10:22:12.137484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:32.683 [2024-11-29 10:22:12.144855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:32.683 [2024-11-29 10:22:12.144896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:32.945 
[2024-11-29 10:22:12.152846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:32.945 [2024-11-29 10:22:12.153563] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:32.945 [2024-11-29 10:22:12.183846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:32.945 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:32.945 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.945 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.945 10:22:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:32.945 { 00:16:32.945 "ublk_device": "/dev/ublkb0", 00:16:32.945 "id": 0, 00:16:32.945 "queue_depth": 512, 00:16:32.945 "num_queues": 4, 00:16:32.945 "bdev_name": "Malloc0" 00:16:32.945 } 00:16:32.945 ]' 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:32.945 10:22:12 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
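The fio_template assembled above expands to the single invocation that runs next in the log. Restated as a standalone command with the parameters from the trace (only the comments are new):

    # time-based 10 s write pass over 128 MiB (134217728 bytes) of /dev/ublkb0,
    # stamping every block with pattern 0xcc; as fio itself warns below, the
    # verify read phase never runs because the write phase uses the whole runtime
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0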
10:22:12 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
fio: verification read phase will never start because write phase uses all of runtime
fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.35
Starting 1 process

fio_test: (groupid=0, jobs=1): err= 0: pid=84735: Fri Nov 29 10:22:22 2024
  write: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(574MiB/10003msec); 0 zone resets
    clat (usec): min=46, max=4101, avg=67.32, stdev=93.80
     lat (usec): min=46, max=4101, avg=67.74, stdev=93.82
    clat percentiles (usec):
     |  1.00th=[   51],  5.00th=[   54], 10.00th=[   56], 20.00th=[   58],
     | 30.00th=[   60], 40.00th=[   61], 50.00th=[   62], 60.00th=[   64],
     | 70.00th=[   65], 80.00th=[   68], 90.00th=[   73], 95.00th=[   79],
     | 99.00th=[  116], 99.50th=[  137], 99.90th=[ 1827], 99.95th=[ 2671],
     | 99.99th=[ 3589]
   bw (  KiB/s): min=40031, max=64784, per=99.88%, avg=58704.79, stdev=5685.10, samples=19
   iops        : min=10007, max=16196, avg=14676.16, stdev=1421.41, samples=19
  lat (usec)   : 50=0.49%, 100=97.54%, 250=1.70%, 500=0.08%, 750=0.02%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.06%, 4=0.09%, 10=0.01%
  cpu          : usr=2.15%, sys=13.72%, ctx=146977, majf=0, minf=794
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,146986,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=574MiB (602MB), run=10003-10003msec

Disk stats (read/write):
  ublkb0: ios=0/145397, merge=0/0, ticks=0/8188, in_queue=8189, util=99.07%
10:22:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
[2024-11-29 10:22:22.620360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
[2024-11-29 10:22:22.665816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
[2024-11-29 10:22:22.666550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
[2024-11-29 10:22:22.676859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
[2024-11-29 10:22:22.677115] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
[2024-11-29 10:22:22.677122] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:22:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 00:16:43.449 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:43.449 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.450 [2024-11-29 10:22:22.693908] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:43.450 request: 00:16:43.450 { 00:16:43.450 "ublk_id": 0, 00:16:43.450 "method": "ublk_stop_disk", 00:16:43.450 "req_id": 1 00:16:43.450 } 00:16:43.450 Got JSON-RPC error response 00:16:43.450 response: 00:16:43.450 { 00:16:43.450 "code": -19, 00:16:43.450 "message": "No such device" 00:16:43.450 } 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.450 10:22:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.450 [2024-11-29 10:22:22.707886] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.450 [2024-11-29 10:22:22.710100] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.450 [2024-11-29 10:22:22.710128] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.450 10:22:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.450 10:22:22 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:43.450 10:22:22 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:43.450 ************************************ 00:16:43.450 END TEST test_create_ublk 00:16:43.450 ************************************ 00:16:43.450 10:22:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:43.450 00:16:43.450 real 0m10.845s 00:16:43.450 user 0m0.526s 00:16:43.450 sys 0m1.456s 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.450 10:22:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.450 10:22:22 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:43.450 10:22:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.450 10:22:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.450 10:22:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 ************************************ 00:16:43.709 START TEST test_create_multi_ublk 00:16:43.709 ************************************ 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 [2024-11-29 10:22:22.926818] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.709 [2024-11-29 10:22:22.927675] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.709 10:22:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 [2024-11-29 10:22:22.998939] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:43.709 [2024-11-29 10:22:22.999231] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:43.709 [2024-11-29 10:22:22.999244] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:43.709 [2024-11-29 10:22:22.999249] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.709 [2024-11-29 10:22:23.022833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.709 [2024-11-29 10:22:23.022851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.709 [2024-11-29 10:22:23.034821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.709 [2024-11-29 10:22:23.035291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:43.709 [2024-11-29 10:22:23.074829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.709 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 [2024-11-29 10:22:23.158907] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:43.709 [2024-11-29 10:22:23.159201] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:43.709 [2024-11-29 10:22:23.159211] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:43.709 [2024-11-29 10:22:23.159217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.709 [2024-11-29 10:22:23.170837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.709 [2024-11-29 10:22:23.170856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.968 [2024-11-29 10:22:23.182826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.968 [2024-11-29 10:22:23.183309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:43.968 [2024-11-29 10:22:23.218821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.968 
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.968 [2024-11-29 10:22:23.302914] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:43.968 [2024-11-29 10:22:23.303202] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:43.968 [2024-11-29 10:22:23.303215] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:43.968 [2024-11-29 10:22:23.303220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.968 [2024-11-29 10:22:23.314838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.968 [2024-11-29 10:22:23.314854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.968 [2024-11-29 10:22:23.326832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.968 [2024-11-29 10:22:23.327306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:43.968 [2024-11-29 10:22:23.362832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.968 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.227 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.227 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:44.227 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:44.227 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.227 10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.227 [2024-11-29 10:22:23.446911] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:44.227 [2024-11-29 10:22:23.447209] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:44.227 [2024-11-29 10:22:23.447222] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:44.227 [2024-11-29 10:22:23.447228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.227 
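The four near-identical create sequences above and below are the unrolled iterations of the test's `for i in $(seq 0 $MAX_DEV_ID)` loop. A condensed sketch of that loop, with the rpc calls as traced (MAX_DEV_ID=3 is inferred from the four devices; rpc.py stands in for the harness's rpc_cmd wrapper):

    for i in $(seq 0 3); do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB bdev, 4 KiB blocks
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512    # 4 queues, depth 512 -> /dev/ublkb$i
    done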
[2024-11-29 10:22:23.458841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
[2024-11-29 10:22:23.458862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
[2024-11-29 10:22:23.470832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
[2024-11-29 10:22:23.471304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
[2024-11-29 10:22:23.502823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
10:22:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
  {
    "ublk_device": "/dev/ublkb0",
    "id": 0,
    "queue_depth": 512,
    "num_queues": 4,
    "bdev_name": "Malloc0"
  },
  {
    "ublk_device": "/dev/ublkb1",
    "id": 1,
    "queue_depth": 512,
    "num_queues": 4,
    "bdev_name": "Malloc1"
  },
  {
    "ublk_device": "/dev/ublkb2",
    "id": 2,
    "queue_depth": 512,
    "num_queues": 4,
    "bdev_name": "Malloc2"
  },
  {
    "ublk_device": "/dev/ublkb3",
    "id": 3,
    "queue_depth": 512,
    "num_queues": 4,
    "bdev_name": "Malloc3"
  }
]'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 =
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:44.486 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:44.746 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.746 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:44.746 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.746 10:22:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.746 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.746 [2024-11-29 10:22:24.194891] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.004 [2024-11-29 10:22:24.237817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.004 [2024-11-29 10:22:24.238714] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.004 [2024-11-29 10:22:24.245838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.004 [2024-11-29 10:22:24.246087] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:45.005 [2024-11-29 10:22:24.246098] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.005 [2024-11-29 10:22:24.261895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.005 [2024-11-29 10:22:24.298338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.005 [2024-11-29 10:22:24.299493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.005 [2024-11-29 10:22:24.304835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.005 [2024-11-29 10:22:24.305082] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:45.005 [2024-11-29 10:22:24.305093] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.005 [2024-11-29 10:22:24.320889] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.005 [2024-11-29 10:22:24.362314] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.005 [2024-11-29 10:22:24.363389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.005 [2024-11-29 10:22:24.372822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.005 [2024-11-29 10:22:24.373049] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:45.005 [2024-11-29 10:22:24.373061] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.005 [2024-11-29 10:22:24.388894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.005 [2024-11-29 10:22:24.423854] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.005 [2024-11-29 10:22:24.424514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.005 [2024-11-29 10:22:24.432826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.005 [2024-11-29 10:22:24.433062] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:45.005 [2024-11-29 10:22:24.433072] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.005 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:45.264 [2024-11-29 10:22:24.624876] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.264 [2024-11-29 10:22:24.626583] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:45.264 [2024-11-29 10:22:24.626612] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.264 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:45.522 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:45.523 10:22:24 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:45.523 ************************************ 00:16:45.523 END TEST test_create_multi_ublk 00:16:45.523 ************************************ 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:45.523 00:16:45.523 real 0m2.055s 00:16:45.523 user 0m0.822s 00:16:45.523 sys 0m0.160s 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.523 10:22:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.782 10:22:25 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:45.782 10:22:25 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:45.782 10:22:25 ublk -- ublk/ublk.sh@130 -- # killprocess 84691 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@954 -- # '[' -z 84691 ']' 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@958 -- # kill -0 84691 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@959 -- # uname 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84691 00:16:45.782 killing process with pid 84691 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84691' 00:16:45.782 10:22:25 ublk -- common/autotest_common.sh@973 -- # kill 84691 00:16:45.783 10:22:25 ublk -- common/autotest_common.sh@978 -- # wait 84691 00:16:45.783 [2024-11-29 10:22:25.187157] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.783 [2024-11-29 10:22:25.187221] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:46.041 00:16:46.041 real 0m18.552s 00:16:46.041 user 0m28.207s 00:16:46.041 sys 0m8.220s 00:16:46.041 10:22:25 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.041 ************************************ 00:16:46.041 END TEST ublk 00:16:46.041 ************************************ 00:16:46.041 10:22:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.041 10:22:25 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:46.041 
10:22:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.041 10:22:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.041 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:16:46.041 ************************************ 00:16:46.041 START TEST ublk_recovery 00:16:46.041 ************************************ 00:16:46.041 10:22:25 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:46.302 * Looking for test storage... 00:16:46.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.302 10:22:25 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.302 --rc genhtml_branch_coverage=1 00:16:46.302 --rc genhtml_function_coverage=1 00:16:46.302 --rc genhtml_legend=1 00:16:46.302 --rc geninfo_all_blocks=1 00:16:46.302 --rc geninfo_unexecuted_blocks=1 00:16:46.302 00:16:46.302 ' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.302 --rc genhtml_branch_coverage=1 00:16:46.302 --rc genhtml_function_coverage=1 00:16:46.302 --rc genhtml_legend=1 00:16:46.302 --rc geninfo_all_blocks=1 00:16:46.302 --rc geninfo_unexecuted_blocks=1 00:16:46.302 00:16:46.302 ' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.302 --rc genhtml_branch_coverage=1 00:16:46.302 --rc genhtml_function_coverage=1 00:16:46.302 --rc genhtml_legend=1 00:16:46.302 --rc geninfo_all_blocks=1 00:16:46.302 --rc geninfo_unexecuted_blocks=1 00:16:46.302 00:16:46.302 ' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.302 --rc genhtml_branch_coverage=1 00:16:46.302 --rc genhtml_function_coverage=1 00:16:46.302 --rc genhtml_legend=1 00:16:46.302 --rc geninfo_all_blocks=1 00:16:46.302 --rc geninfo_unexecuted_blocks=1 00:16:46.302 00:16:46.302 ' 00:16:46.302 10:22:25 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:46.302 10:22:25 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:46.302 10:22:25 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.302 10:22:25 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85056 00:16:46.302 10:22:25 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.302 10:22:25 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:46.302 10:22:25 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85056 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85056 ']' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.302 10:22:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.302 [2024-11-29 10:22:25.713141] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:46.302 [2024-11-29 10:22:25.713265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85056 ] 00:16:46.564 [2024-11-29 10:22:25.861042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.564 [2024-11-29 10:22:25.881761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.564 [2024-11-29 10:22:25.881771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:47.137 10:22:26 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.137 [2024-11-29 10:22:26.554822] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.137 [2024-11-29 10:22:26.555904] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.137 10:22:26 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.137 malloc0 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.137 10:22:26 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.137 10:22:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.137 [2024-11-29 10:22:26.587140] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:47.137 [2024-11-29 10:22:26.587238] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:47.137 [2024-11-29 10:22:26.587252] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:47.137 [2024-11-29 10:22:26.587267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.137 [2024-11-29 10:22:26.595915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.137 [2024-11-29 10:22:26.595940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.398 [2024-11-29 10:22:26.602829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.398 [2024-11-29 10:22:26.602966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:47.398 [2024-11-29 10:22:26.610903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.398 1 00:16:47.398 10:22:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.398 10:22:26 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:48.343 10:22:27 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85089 00:16:48.343 10:22:27 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:48.343 10:22:27 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:48.343 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:48.343 fio-3.35 00:16:48.343 Starting 1 process 00:16:53.625 10:22:32 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85056 00:16:53.625 10:22:32 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:58.911 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85056 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:58.911 10:22:37 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85200 00:16:58.911 10:22:37 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.911 10:22:37 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:58.911 10:22:37 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85200 00:16:58.911 10:22:37 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85200 ']' 00:16:58.911 10:22:37 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.911 10:22:37 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.911 10:22:37 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.911 10:22:37 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.911 10:22:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.911 [2024-11-29 10:22:37.709756] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
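The trace above covers the crash half of the scenario: target pid 85056 exposes malloc0 as /dev/ublkb1, fio is pinned to cores 2-3 (away from the target's 0x3 mask, cores 0-1), the target is killed with SIGKILL mid-run, and a replacement target (pid 85200) is started in its place. A minimal sketch of that sequence, calling scripts/rpc.py directly instead of going through the script's rpc_cmd wrapper (paths, sizes and flags are taken from the trace; waits and error handling are omitted):

    modprobe ublk_drv
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$TGT" -m 0x3 -L ublk & spdk_pid=$!
    # (the real script blocks on waitforlisten for /var/tmp/spdk.sock here)
    "$RPC" ublk_create_target
    "$RPC" bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev, 4 KiB blocks
    "$RPC" ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1: 2 queues, qd 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio_pid=$!
    kill -9 "$spdk_pid"                               # simulate a target crash mid-I/O
    "$TGT" -m 0x3 -L ublk & spdk_pid=$!               # replacement target; recovery below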
00:16:58.911 [2024-11-29 10:22:37.710379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85200 ] 00:16:58.911 [2024-11-29 10:22:37.857106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.911 [2024-11-29 10:22:37.877742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.911 [2024-11-29 10:22:37.877774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:59.173 10:22:38 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.173 [2024-11-29 10:22:38.507821] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.173 [2024-11-29 10:22:38.508891] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.173 10:22:38 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.173 malloc0 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.173 10:22:38 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.173 [2024-11-29 10:22:38.540052] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:59.173 [2024-11-29 10:22:38.540089] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:59.173 [2024-11-29 10:22:38.540097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:59.173 [2024-11-29 10:22:38.547862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:59.173 [2024-11-29 10:22:38.547879] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:59.173 1 00:16:59.173 10:22:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.173 10:22:38 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85089 00:17:00.118 [2024-11-29 10:22:39.547907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:00.118 [2024-11-29 10:22:39.555834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:00.118 [2024-11-29 10:22:39.555854] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:01.496 [2024-11-29 10:22:40.555882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:01.496 [2024-11-29 10:22:40.562827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:01.496 [2024-11-29 10:22:40.562843] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:02.433 [2024-11-29 10:22:41.562869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:02.433 [2024-11-29 10:22:41.566825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:02.433 [2024-11-29 10:22:41.566838] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:02.433 [2024-11-29 10:22:41.566844] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:02.433 [2024-11-29 10:22:41.566902] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:24.361 [2024-11-29 10:23:02.855827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:24.361 [2024-11-29 10:23:02.862408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:24.361 [2024-11-29 10:23:02.870001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:24.361 [2024-11-29 10:23:02.870020] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:50.904 00:17:50.904 fio_test: (groupid=0, jobs=1): err= 0: pid=85092: Fri Nov 29 10:23:27 2024 00:17:50.904 read: IOPS=14.1k, BW=55.0MiB/s (57.7MB/s)(3300MiB/60002msec) 00:17:50.904 slat (nsec): min=1008, max=256566, avg=5071.43, stdev=1971.09 00:17:50.904 clat (usec): min=860, max=30256k, avg=4504.61, stdev=263316.15 00:17:50.904 lat (usec): min=866, max=30256k, avg=4509.68, stdev=263316.14 00:17:50.904 clat percentiles (usec): 00:17:50.904 | 1.00th=[ 1778], 5.00th=[ 1860], 10.00th=[ 1876], 20.00th=[ 1909], 00:17:50.904 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1975], 60.00th=[ 1991], 00:17:50.904 | 70.00th=[ 2343], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 3097], 00:17:50.904 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7242], 99.95th=[ 8455], 00:17:50.904 | 99.99th=[13435] 00:17:50.904 bw ( KiB/s): min=55152, max=125456, per=100.00%, avg=112568.95, stdev=16192.04, samples=59 00:17:50.904 iops : min=13788, max=31364, avg=28142.24, stdev=4048.01, samples=59 00:17:50.904 write: IOPS=14.1k, BW=54.9MiB/s (57.6MB/s)(3295MiB/60002msec); 0 zone resets 00:17:50.904 slat (nsec): min=941, max=197859, avg=5102.94, stdev=1972.63 00:17:50.904 clat (usec): min=783, max=30256k, avg=4582.08, stdev=263492.49 00:17:50.904 lat (usec): min=788, max=30256k, avg=4587.19, stdev=263492.48 00:17:50.904 clat percentiles (usec): 00:17:50.904 | 1.00th=[ 1827], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 1991], 00:17:50.904 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2089], 00:17:50.904 | 70.00th=[ 2409], 80.00th=[ 2540], 90.00th=[ 2671], 95.00th=[ 3032], 00:17:50.904 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7308], 99.95th=[ 8717], 00:17:50.904 | 99.99th=[13435] 00:17:50.904 bw ( KiB/s): min=54816, max=125320, per=100.00%, avg=112420.07, stdev=16264.58, samples=59 00:17:50.904 iops : min=13704, max=31330, avg=28105.02, stdev=4066.15, samples=59 00:17:50.904 lat (usec) : 1000=0.01% 00:17:50.904 lat (msec) : 2=40.86%, 4=56.46%, 10=2.63%, 20=0.04%, >=2000=0.01% 00:17:50.904 cpu : usr=3.22%, sys=14.65%, ctx=57231, majf=0, minf=15 00:17:50.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:50.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:50.904 issued 
rwts: total=844729,843611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:50.904 00:17:50.904 Run status group 0 (all jobs): 00:17:50.904 READ: bw=55.0MiB/s (57.7MB/s), 55.0MiB/s-55.0MiB/s (57.7MB/s-57.7MB/s), io=3300MiB (3460MB), run=60002-60002msec 00:17:50.904 WRITE: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=3295MiB (3455MB), run=60002-60002msec 00:17:50.904 00:17:50.904 Disk stats (read/write): 00:17:50.904 ublkb1: ios=841354/840247, merge=0/0, ticks=3741550/3733431, in_queue=7474981, util=99.89% 00:17:50.904 10:23:27 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.904 [2024-11-29 10:23:27.870770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:50.904 [2024-11-29 10:23:27.906939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:50.904 [2024-11-29 10:23:27.907084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:50.904 [2024-11-29 10:23:27.916832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:50.904 [2024-11-29 10:23:27.916934] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:50.904 [2024-11-29 10:23:27.916948] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.904 10:23:27 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.904 [2024-11-29 10:23:27.932888] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:50.904 [2024-11-29 10:23:27.934121] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:50.904 [2024-11-29 10:23:27.934158] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.904 10:23:27 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:50.904 10:23:27 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:50.904 10:23:27 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85200 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85200 ']' 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85200 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85200 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.904 killing process with pid 85200 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85200' 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85200 00:17:50.904 10:23:27 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85200 
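The second target recovers the existing device rather than creating a new one: ublk_recover_disk reattaches /dev/ublkb1 through UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY, fio's 60-second run rides out the restart (note err= 0 for pid 85092 in the summary above), and the disk and target are torn down once the wait returns. The recovery and cleanup half as direct RPC calls, a sketch under the same assumptions as the earlier one:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" ublk_create_target
    "$RPC" bdev_malloc_create -b malloc0 64 4096   # the backing bdev must exist again
    "$RPC" ublk_recover_disk malloc0 1             # reattach ublk 1 instead of ublk_start_disk
    wait "$fio_pid"                                # fio finishes against the recovered device
    "$RPC" ublk_stop_disk 1
    "$RPC" ublk_destroy_target
    kill "$spdk_pid"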
00:17:50.904 [2024-11-29 10:23:28.129003] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:50.904 [2024-11-29 10:23:28.129061] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:50.904 00:17:50.904 real 1m2.902s 00:17:50.904 user 1m45.661s 00:17:50.904 sys 0m20.375s 00:17:50.904 ************************************ 00:17:50.905 END TEST ublk_recovery 00:17:50.905 ************************************ 00:17:50.905 10:23:28 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.905 10:23:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.905 10:23:28 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:50.905 10:23:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:50.905 10:23:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.905 10:23:28 -- common/autotest_common.sh@10 -- # set +x 00:17:50.905 10:23:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:50.905 10:23:28 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:50.905 10:23:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:50.905 10:23:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.905 10:23:28 -- common/autotest_common.sh@10 -- # set +x 00:17:50.905 ************************************ 00:17:50.905 START TEST ftl 00:17:50.905 ************************************ 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:50.905 * Looking for test storage... 
00:17:50.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.905 10:23:28 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.905 10:23:28 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.905 10:23:28 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.905 10:23:28 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.905 10:23:28 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.905 10:23:28 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:50.905 10:23:28 ftl -- scripts/common.sh@345 -- # : 1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.905 10:23:28 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.905 10:23:28 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@353 -- # local d=1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.905 10:23:28 ftl -- scripts/common.sh@355 -- # echo 1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.905 10:23:28 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@353 -- # local d=2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.905 10:23:28 ftl -- scripts/common.sh@355 -- # echo 2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.905 10:23:28 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.905 10:23:28 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.905 10:23:28 ftl -- scripts/common.sh@368 -- # return 0 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:50.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.905 --rc genhtml_branch_coverage=1 00:17:50.905 --rc genhtml_function_coverage=1 00:17:50.905 --rc genhtml_legend=1 00:17:50.905 --rc geninfo_all_blocks=1 00:17:50.905 --rc geninfo_unexecuted_blocks=1 00:17:50.905 00:17:50.905 ' 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:50.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.905 --rc genhtml_branch_coverage=1 00:17:50.905 --rc genhtml_function_coverage=1 00:17:50.905 --rc genhtml_legend=1 00:17:50.905 --rc geninfo_all_blocks=1 00:17:50.905 --rc geninfo_unexecuted_blocks=1 00:17:50.905 00:17:50.905 ' 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:50.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.905 --rc genhtml_branch_coverage=1 00:17:50.905 --rc genhtml_function_coverage=1 00:17:50.905 --rc 
genhtml_legend=1 00:17:50.905 --rc geninfo_all_blocks=1 00:17:50.905 --rc geninfo_unexecuted_blocks=1 00:17:50.905 00:17:50.905 ' 00:17:50.905 10:23:28 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:50.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.905 --rc genhtml_branch_coverage=1 00:17:50.905 --rc genhtml_function_coverage=1 00:17:50.905 --rc genhtml_legend=1 00:17:50.905 --rc geninfo_all_blocks=1 00:17:50.905 --rc geninfo_unexecuted_blocks=1 00:17:50.905 00:17:50.905 ' 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:50.905 10:23:28 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:50.905 10:23:28 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.905 10:23:28 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.905 10:23:28 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:50.905 10:23:28 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:50.905 10:23:28 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.905 10:23:28 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:50.905 10:23:28 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:50.905 10:23:28 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.905 10:23:28 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.905 10:23:28 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:50.905 10:23:28 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:50.905 10:23:28 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.905 10:23:28 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.905 10:23:28 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:50.905 10:23:28 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:50.905 10:23:28 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.905 10:23:28 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.905 10:23:28 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:50.905 10:23:28 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:50.905 10:23:28 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.905 10:23:28 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.905 10:23:28 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.905 10:23:28 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.905 10:23:28 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:50.905 10:23:28 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:50.905 10:23:28 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.905 10:23:28 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:50.905 10:23:28 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:50.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:50.905 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:50.905 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:50.905 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:50.905 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:50.905 10:23:29 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85996 00:17:50.905 10:23:29 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85996 00:17:50.905 10:23:29 ftl -- common/autotest_common.sh@835 -- # '[' -z 85996 ']' 00:17:50.905 10:23:29 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:50.905 10:23:29 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.905 10:23:29 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.905 10:23:29 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.905 10:23:29 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.905 10:23:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:50.905 [2024-11-29 10:23:29.258247] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:17:50.905 [2024-11-29 10:23:29.258392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85996 ] 00:17:50.905 [2024-11-29 10:23:29.402903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.905 [2024-11-29 10:23:29.432309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.905 10:23:30 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.905 10:23:30 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:50.905 10:23:30 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:50.905 10:23:30 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:51.477 10:23:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:51.477 10:23:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@50 -- # break 00:17:52.065 10:23:31 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:52.066 10:23:31 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:52.066 10:23:31 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:52.066 10:23:31 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:52.361 10:23:31 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:52.361 10:23:31 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:52.361 10:23:31 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:52.361 10:23:31 ftl -- ftl/ftl.sh@63 -- # break 00:17:52.361 10:23:31 ftl -- ftl/ftl.sh@66 -- # killprocess 85996 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 85996 ']' 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@958 -- # kill -0 85996 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@959 -- # uname 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85996 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.361 killing process with pid 85996 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85996' 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@973 -- # kill 85996 00:17:52.361 10:23:31 ftl -- common/autotest_common.sh@978 -- # wait 85996 00:17:52.644 10:23:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:52.644 10:23:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:52.644 10:23:31 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:52.644 10:23:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.644 10:23:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.644 ************************************ 00:17:52.644 START TEST ftl_fio_basic 00:17:52.644 ************************************ 00:17:52.644 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:52.644 * Looking for test storage... 
00:17:52.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.644 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.644 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.644 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.906 --rc genhtml_branch_coverage=1 00:17:52.906 --rc genhtml_function_coverage=1 00:17:52.906 --rc genhtml_legend=1 00:17:52.906 --rc geninfo_all_blocks=1 00:17:52.906 --rc geninfo_unexecuted_blocks=1 00:17:52.906 00:17:52.906 ' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.906 --rc 
genhtml_branch_coverage=1 00:17:52.906 --rc genhtml_function_coverage=1 00:17:52.906 --rc genhtml_legend=1 00:17:52.906 --rc geninfo_all_blocks=1 00:17:52.906 --rc geninfo_unexecuted_blocks=1 00:17:52.906 00:17:52.906 ' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.906 --rc genhtml_branch_coverage=1 00:17:52.906 --rc genhtml_function_coverage=1 00:17:52.906 --rc genhtml_legend=1 00:17:52.906 --rc geninfo_all_blocks=1 00:17:52.906 --rc geninfo_unexecuted_blocks=1 00:17:52.906 00:17:52.906 ' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.906 --rc genhtml_branch_coverage=1 00:17:52.906 --rc genhtml_function_coverage=1 00:17:52.906 --rc genhtml_legend=1 00:17:52.906 --rc geninfo_all_blocks=1 00:17:52.906 --rc geninfo_unexecuted_blocks=1 00:17:52.906 00:17:52.906 ' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.906 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.907 
10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86112 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86112 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86112 ']' 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
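Unlike the ublk run, the fio harness starts its target with -m 7: the mask is a bitmap of cores (7 = 0b111 selects cores 0-2), which is why three reactor notices and "Total cores available: 3" follow below. A two-line sketch of the correspondence, assuming standard SPDK core-mask semantics:

    spdk_tgt -m 0x3 -L ublk   # 0b011 -> reactors on cores 0 and 1 (ublk_recovery above)
    spdk_tgt -m 7             # 0b111 -> reactors on cores 0, 1 and 2 (this FTL run)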
00:17:52.907 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:52.907 10:23:32 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:52.907 [2024-11-29 10:23:32.272580] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:17:52.907 [2024-11-29 10:23:32.272729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86112 ] 00:17:53.169 [2024-11-29 10:23:32.420775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.169 [2024-11-29 10:23:32.452729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.169 [2024-11-29 10:23:32.453089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.169 [2024-11-29 10:23:32.453154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:53.742 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.004 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.265 { 00:17:54.265 "name": "nvme0n1", 00:17:54.265 "aliases": [ 00:17:54.265 "c1e48f72-4621-42fd-b377-a716152028aa" 00:17:54.265 ], 00:17:54.265 "product_name": "NVMe disk", 00:17:54.265 "block_size": 4096, 00:17:54.265 "num_blocks": 1310720, 00:17:54.265 "uuid": "c1e48f72-4621-42fd-b377-a716152028aa", 00:17:54.265 "numa_id": -1, 00:17:54.265 "assigned_rate_limits": { 00:17:54.265 "rw_ios_per_sec": 0, 00:17:54.265 "rw_mbytes_per_sec": 0, 00:17:54.265 "r_mbytes_per_sec": 0, 00:17:54.265 "w_mbytes_per_sec": 0 00:17:54.265 }, 00:17:54.265 "claimed": false, 00:17:54.265 "zoned": false, 00:17:54.265 "supported_io_types": { 00:17:54.265 "read": true, 00:17:54.265 "write": true, 00:17:54.265 "unmap": true, 00:17:54.265 "flush": true, 
00:17:54.265 "reset": true, 00:17:54.265 "nvme_admin": true, 00:17:54.265 "nvme_io": true, 00:17:54.265 "nvme_io_md": false, 00:17:54.265 "write_zeroes": true, 00:17:54.265 "zcopy": false, 00:17:54.265 "get_zone_info": false, 00:17:54.265 "zone_management": false, 00:17:54.265 "zone_append": false, 00:17:54.265 "compare": true, 00:17:54.265 "compare_and_write": false, 00:17:54.265 "abort": true, 00:17:54.265 "seek_hole": false, 00:17:54.265 "seek_data": false, 00:17:54.265 "copy": true, 00:17:54.265 "nvme_iov_md": false 00:17:54.265 }, 00:17:54.265 "driver_specific": { 00:17:54.265 "nvme": [ 00:17:54.265 { 00:17:54.265 "pci_address": "0000:00:11.0", 00:17:54.265 "trid": { 00:17:54.265 "trtype": "PCIe", 00:17:54.265 "traddr": "0000:00:11.0" 00:17:54.265 }, 00:17:54.265 "ctrlr_data": { 00:17:54.265 "cntlid": 0, 00:17:54.265 "vendor_id": "0x1b36", 00:17:54.265 "model_number": "QEMU NVMe Ctrl", 00:17:54.265 "serial_number": "12341", 00:17:54.265 "firmware_revision": "8.0.0", 00:17:54.265 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:54.265 "oacs": { 00:17:54.265 "security": 0, 00:17:54.265 "format": 1, 00:17:54.265 "firmware": 0, 00:17:54.265 "ns_manage": 1 00:17:54.265 }, 00:17:54.265 "multi_ctrlr": false, 00:17:54.265 "ana_reporting": false 00:17:54.265 }, 00:17:54.265 "vs": { 00:17:54.265 "nvme_version": "1.4" 00:17:54.265 }, 00:17:54.265 "ns_data": { 00:17:54.265 "id": 1, 00:17:54.265 "can_share": false 00:17:54.265 } 00:17:54.265 } 00:17:54.265 ], 00:17:54.265 "mp_policy": "active_passive" 00:17:54.265 } 00:17:54.265 } 00:17:54.265 ]' 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.265 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:54.527 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:54.528 10:23:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:54.789 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=6db3a611-303b-4eed-be5a-3ce07c53e999 00:17:54.789 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6db3a611-303b-4eed-be5a-3ce07c53e999 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:55.050 10:23:34 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:55.050 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.310 { 00:17:55.310 "name": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:55.310 "aliases": [ 00:17:55.310 "lvs/nvme0n1p0" 00:17:55.310 ], 00:17:55.310 "product_name": "Logical Volume", 00:17:55.310 "block_size": 4096, 00:17:55.310 "num_blocks": 26476544, 00:17:55.310 "uuid": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:55.310 "assigned_rate_limits": { 00:17:55.310 "rw_ios_per_sec": 0, 00:17:55.310 "rw_mbytes_per_sec": 0, 00:17:55.310 "r_mbytes_per_sec": 0, 00:17:55.310 "w_mbytes_per_sec": 0 00:17:55.310 }, 00:17:55.310 "claimed": false, 00:17:55.310 "zoned": false, 00:17:55.310 "supported_io_types": { 00:17:55.310 "read": true, 00:17:55.310 "write": true, 00:17:55.310 "unmap": true, 00:17:55.310 "flush": false, 00:17:55.310 "reset": true, 00:17:55.310 "nvme_admin": false, 00:17:55.310 "nvme_io": false, 00:17:55.310 "nvme_io_md": false, 00:17:55.310 "write_zeroes": true, 00:17:55.310 "zcopy": false, 00:17:55.310 "get_zone_info": false, 00:17:55.310 "zone_management": false, 00:17:55.310 "zone_append": false, 00:17:55.310 "compare": false, 00:17:55.310 "compare_and_write": false, 00:17:55.310 "abort": false, 00:17:55.310 "seek_hole": true, 00:17:55.310 "seek_data": true, 00:17:55.310 "copy": false, 00:17:55.310 "nvme_iov_md": false 00:17:55.310 }, 00:17:55.310 "driver_specific": { 00:17:55.310 "lvol": { 00:17:55.310 "lvol_store_uuid": "6db3a611-303b-4eed-be5a-3ce07c53e999", 00:17:55.310 "base_bdev": "nvme0n1", 00:17:55.310 "thin_provision": true, 00:17:55.310 "num_allocated_clusters": 0, 00:17:55.310 "snapshot": false, 00:17:55.310 "clone": false, 00:17:55.310 "esnap_clone": false 00:17:55.310 } 00:17:55.310 } 00:17:55.310 } 00:17:55.310 ]' 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:55.310 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
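The ftl_fio_basic prologue builds a two-device FTL stack: a thin-provisioned 103424 MiB lvol on the base controller at 0000:00:11.0 for data, and a 5171 MiB split of the controller at 0000:00:10.0 (attached here as nvc0) for the NV write-buffer cache. Condensed into direct RPC calls from the invocations traced above and below (a sketch: the lvstore and lvol identifiers are printed by rpc.py and assumed here to be captured cleanly into shell variables):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
    LVS=$("$RPC" bdev_lvol_create_lvstore nvme0n1 lvs)
    LVOL=$("$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")         # thin lvol, 103424 MiB
    "$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
    "$RPC" bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB part: nvc0n1p0
    "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 --l2p_dram_limit 60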
00:17:55.568 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:55.568 10:23:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:55.833 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.833 { 00:17:55.833 "name": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:55.833 "aliases": [ 00:17:55.833 "lvs/nvme0n1p0" 00:17:55.833 ], 00:17:55.834 "product_name": "Logical Volume", 00:17:55.834 "block_size": 4096, 00:17:55.834 "num_blocks": 26476544, 00:17:55.834 "uuid": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:55.834 "assigned_rate_limits": { 00:17:55.834 "rw_ios_per_sec": 0, 00:17:55.834 "rw_mbytes_per_sec": 0, 00:17:55.834 "r_mbytes_per_sec": 0, 00:17:55.834 "w_mbytes_per_sec": 0 00:17:55.834 }, 00:17:55.834 "claimed": false, 00:17:55.834 "zoned": false, 00:17:55.834 "supported_io_types": { 00:17:55.834 "read": true, 00:17:55.834 "write": true, 00:17:55.834 "unmap": true, 00:17:55.834 "flush": false, 00:17:55.834 "reset": true, 00:17:55.834 "nvme_admin": false, 00:17:55.834 "nvme_io": false, 00:17:55.834 "nvme_io_md": false, 00:17:55.834 "write_zeroes": true, 00:17:55.834 "zcopy": false, 00:17:55.834 "get_zone_info": false, 00:17:55.834 "zone_management": false, 00:17:55.834 "zone_append": false, 00:17:55.834 "compare": false, 00:17:55.834 "compare_and_write": false, 00:17:55.834 "abort": false, 00:17:55.834 "seek_hole": true, 00:17:55.834 "seek_data": true, 00:17:55.834 "copy": false, 00:17:55.834 "nvme_iov_md": false 00:17:55.834 }, 00:17:55.834 "driver_specific": { 00:17:55.834 "lvol": { 00:17:55.834 "lvol_store_uuid": "6db3a611-303b-4eed-be5a-3ce07c53e999", 00:17:55.834 "base_bdev": "nvme0n1", 00:17:55.834 "thin_provision": true, 00:17:55.834 "num_allocated_clusters": 0, 00:17:55.834 "snapshot": false, 00:17:55.834 "clone": false, 00:17:55.834 "esnap_clone": false 00:17:55.834 } 00:17:55.834 } 00:17:55.834 } 00:17:55.834 ]' 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:55.834 10:23:35 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:56.093 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8cd5c74c-a11c-4b5b-92cc-642540262b24 00:17:56.093 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:56.093 { 00:17:56.093 "name": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:56.093 "aliases": [ 00:17:56.093 "lvs/nvme0n1p0" 00:17:56.093 ], 00:17:56.093 "product_name": "Logical Volume", 00:17:56.093 "block_size": 4096, 00:17:56.093 "num_blocks": 26476544, 00:17:56.093 "uuid": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:56.093 "assigned_rate_limits": { 00:17:56.093 "rw_ios_per_sec": 0, 00:17:56.093 "rw_mbytes_per_sec": 0, 00:17:56.093 "r_mbytes_per_sec": 0, 00:17:56.093 "w_mbytes_per_sec": 0 00:17:56.093 }, 00:17:56.093 "claimed": false, 00:17:56.093 "zoned": false, 00:17:56.093 "supported_io_types": { 00:17:56.093 "read": true, 00:17:56.093 "write": true, 00:17:56.093 "unmap": true, 00:17:56.093 "flush": false, 00:17:56.093 "reset": true, 00:17:56.093 "nvme_admin": false, 00:17:56.093 "nvme_io": false, 00:17:56.093 "nvme_io_md": false, 00:17:56.093 "write_zeroes": true, 00:17:56.093 "zcopy": false, 00:17:56.093 "get_zone_info": false, 00:17:56.093 "zone_management": false, 00:17:56.093 "zone_append": false, 00:17:56.093 "compare": false, 00:17:56.093 "compare_and_write": false, 00:17:56.093 "abort": false, 00:17:56.093 "seek_hole": true, 00:17:56.093 "seek_data": true, 00:17:56.093 "copy": false, 00:17:56.093 "nvme_iov_md": false 00:17:56.093 }, 00:17:56.093 "driver_specific": { 00:17:56.093 "lvol": { 00:17:56.093 "lvol_store_uuid": "6db3a611-303b-4eed-be5a-3ce07c53e999", 00:17:56.094 "base_bdev": "nvme0n1", 00:17:56.094 "thin_provision": true, 00:17:56.094 "num_allocated_clusters": 0, 00:17:56.094 "snapshot": false, 00:17:56.094 "clone": false, 00:17:56.094 "esnap_clone": false 00:17:56.094 } 00:17:56.094 } 00:17:56.094 } 00:17:56.094 ]' 00:17:56.094 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:56.352 10:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
8cd5c74c-a11c-4b5b-92cc-642540262b24 -c nvc0n1p0 --l2p_dram_limit 60 00:17:56.352 [2024-11-29 10:23:35.805701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.352 [2024-11-29 10:23:35.805744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.352 [2024-11-29 10:23:35.805755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.352 [2024-11-29 10:23:35.805771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.352 [2024-11-29 10:23:35.805845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.352 [2024-11-29 10:23:35.805855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.352 [2024-11-29 10:23:35.805861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:56.352 [2024-11-29 10:23:35.805871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.352 [2024-11-29 10:23:35.805897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.352 [2024-11-29 10:23:35.806119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.352 [2024-11-29 10:23:35.806135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.352 [2024-11-29 10:23:35.806143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.352 [2024-11-29 10:23:35.806150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:17:56.352 [2024-11-29 10:23:35.806159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.352 [2024-11-29 10:23:35.806239] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8e37a78e-ba99-4169-b584-26e962482e17 00:17:56.352 [2024-11-29 10:23:35.807264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.352 [2024-11-29 10:23:35.807286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:56.352 [2024-11-29 10:23:35.807296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:56.352 [2024-11-29 10:23:35.807303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.352 [2024-11-29 10:23:35.812398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.352 [2024-11-29 10:23:35.812424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.353 [2024-11-29 10:23:35.812436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.038 ms 00:17:56.353 [2024-11-29 10:23:35.812441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.812522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.353 [2024-11-29 10:23:35.812534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.353 [2024-11-29 10:23:35.812543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:56.353 [2024-11-29 10:23:35.812548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.812620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.353 [2024-11-29 10:23:35.812633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.353 [2024-11-29 10:23:35.812642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.008 ms 00:17:56.353 [2024-11-29 10:23:35.812651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.812677] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.353 [2024-11-29 10:23:35.813964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.353 [2024-11-29 10:23:35.813991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.353 [2024-11-29 10:23:35.814007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:17:56.353 [2024-11-29 10:23:35.814014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.814046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.353 [2024-11-29 10:23:35.814055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.353 [2024-11-29 10:23:35.814061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:56.353 [2024-11-29 10:23:35.814071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.814115] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:56.353 [2024-11-29 10:23:35.814231] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:56.353 [2024-11-29 10:23:35.814244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.353 [2024-11-29 10:23:35.814254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:56.353 [2024-11-29 10:23:35.814264] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814273] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814280] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:56.353 [2024-11-29 10:23:35.814296] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.353 [2024-11-29 10:23:35.814301] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:56.353 [2024-11-29 10:23:35.814308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:56.353 [2024-11-29 10:23:35.814313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.353 [2024-11-29 10:23:35.814320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.353 [2024-11-29 10:23:35.814337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:56.353 [2024-11-29 10:23:35.814344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.814425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.353 [2024-11-29 10:23:35.814446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.353 [2024-11-29 10:23:35.814452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:56.353 [2024-11-29 10:23:35.814459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.353 [2024-11-29 10:23:35.814546] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
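The layout parameters reported above are internally consistent, and the region dump that follows reflects them. Two quick cross-checks (plain arithmetic, not part of the log):

  echo $(( 20971520 * 4 / 1024 / 1024 ))            # 80  -> the 80.00 MiB l2p region listed below
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # 80  -> ftl0 will expose 20971520 4 KiB blocks, i.e. 80 GiB

With --l2p_dram_limit 60 from the create call, only part of that 80 MiB mapping table may stay resident in DRAM, which is why startup later reports 'l2p maximum resident size is: 59 (of 60) MiB'.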
00:17:56.353 [2024-11-29 10:23:35.814555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.353 [2024-11-29 10:23:35.814561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.353 [2024-11-29 10:23:35.814592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.353 [2024-11-29 10:23:35.814610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.353 [2024-11-29 10:23:35.814623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.353 [2024-11-29 10:23:35.814630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:56.353 [2024-11-29 10:23:35.814636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.353 [2024-11-29 10:23:35.814644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.353 [2024-11-29 10:23:35.814651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:56.353 [2024-11-29 10:23:35.814658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.353 [2024-11-29 10:23:35.814672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.353 [2024-11-29 10:23:35.814693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.353 [2024-11-29 10:23:35.814712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.353 [2024-11-29 10:23:35.814731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.353 [2024-11-29 10:23:35.814752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.353 [2024-11-29 10:23:35.814771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.353 [2024-11-29 10:23:35.814783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.353 [2024-11-29 10:23:35.814791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:56.353 [2024-11-29 10:23:35.814796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.353 [2024-11-29 10:23:35.814813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:56.353 [2024-11-29 10:23:35.814818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:56.353 [2024-11-29 10:23:35.814825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:56.353 [2024-11-29 10:23:35.814838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:56.353 [2024-11-29 10:23:35.814844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.353 [2024-11-29 10:23:35.814860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.353 [2024-11-29 10:23:35.814871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.353 [2024-11-29 10:23:35.814884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:56.353 [2024-11-29 10:23:35.814890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.353 [2024-11-29 10:23:35.814898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.353 [2024-11-29 10:23:35.814908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.353 [2024-11-29 10:23:35.814916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.353 [2024-11-29 10:23:35.814921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.353 [2024-11-29 10:23:35.814931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.353 [2024-11-29 10:23:35.814939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.353 [2024-11-29 10:23:35.814957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:56.353 [2024-11-29 10:23:35.814962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:56.353 [2024-11-29 10:23:35.814969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:56.353 [2024-11-29 10:23:35.814975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:56.353 [2024-11-29 10:23:35.814981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:56.353 [2024-11-29 10:23:35.814986] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:56.353 [2024-11-29 10:23:35.814994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:56.353 [2024-11-29 10:23:35.815002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:56.353 [2024-11-29 10:23:35.815009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:56.354 [2024-11-29 10:23:35.815014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:56.354 [2024-11-29 10:23:35.815020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:56.354 [2024-11-29 10:23:35.815025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:56.354 [2024-11-29 10:23:35.815032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:56.354 [2024-11-29 10:23:35.815037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:56.354 [2024-11-29 10:23:35.815043] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.354 [2024-11-29 10:23:35.815049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.354 [2024-11-29 10:23:35.815056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.354 [2024-11-29 10:23:35.815062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.354 [2024-11-29 10:23:35.815068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.354 [2024-11-29 10:23:35.815074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.354 [2024-11-29 10:23:35.815093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.354 [2024-11-29 10:23:35.815099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.354 [2024-11-29 10:23:35.815109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:17:56.354 [2024-11-29 10:23:35.815115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.354 [2024-11-29 10:23:35.815188] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
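Scrubbing appears to write over the entire 5171 MiB NV cache data region before first use, which is why the notice warns it may take a while; here the five chunks complete in roughly 2.7 s (the 2689.258 ms duration reported below). A back-of-the-envelope throughput check, assuming the whole region is written once:

  echo "scale=1; 5171 / 2.689" | bc    # ~1923 MiB/s across the cache device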
00:17:56.354 [2024-11-29 10:23:35.815198] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:59.632 [2024-11-29 10:23:38.504461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.504519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:59.632 [2024-11-29 10:23:38.504546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2689.258 ms 00:17:59.632 [2024-11-29 10:23:38.504556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.513169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.513218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.632 [2024-11-29 10:23:38.513234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.528 ms 00:17:59.632 [2024-11-29 10:23:38.513242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.513357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.513373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.632 [2024-11-29 10:23:38.513392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:59.632 [2024-11-29 10:23:38.513409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.532026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.532071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.632 [2024-11-29 10:23:38.532084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.563 ms 00:17:59.632 [2024-11-29 10:23:38.532093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.532134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.532143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.632 [2024-11-29 10:23:38.532153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:59.632 [2024-11-29 10:23:38.532160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.532518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.532549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.632 [2024-11-29 10:23:38.532562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:59.632 [2024-11-29 10:23:38.532582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.532713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.532724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.632 [2024-11-29 10:23:38.532735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:59.632 [2024-11-29 10:23:38.532754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.538820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.538856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.632 [2024-11-29 
10:23:38.538870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.030 ms 00:17:59.632 [2024-11-29 10:23:38.538882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.548543] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:59.632 [2024-11-29 10:23:38.563167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.563202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:59.632 [2024-11-29 10:23:38.563212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.183 ms 00:17:59.632 [2024-11-29 10:23:38.563222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.606345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.606397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:59.632 [2024-11-29 10:23:38.606408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.086 ms 00:17:59.632 [2024-11-29 10:23:38.606420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.606608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.606625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:59.632 [2024-11-29 10:23:38.606635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:59.632 [2024-11-29 10:23:38.606644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.609451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.609485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:59.632 [2024-11-29 10:23:38.609495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:17:59.632 [2024-11-29 10:23:38.609504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.611954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.611989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:59.632 [2024-11-29 10:23:38.612000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:17:59.632 [2024-11-29 10:23:38.612011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.612314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.612337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.632 [2024-11-29 10:23:38.612346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:59.632 [2024-11-29 10:23:38.612357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.636509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.636545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:59.632 [2024-11-29 10:23:38.636556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.121 ms 00:17:59.632 [2024-11-29 10:23:38.636565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.640323] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.640359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:59.632 [2024-11-29 10:23:38.640370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:17:59.632 [2024-11-29 10:23:38.640380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.643144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.643178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:59.632 [2024-11-29 10:23:38.643187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:17:59.632 [2024-11-29 10:23:38.643196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.646119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.646154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.632 [2024-11-29 10:23:38.646165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:17:59.632 [2024-11-29 10:23:38.646176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.646234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.646247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.632 [2024-11-29 10:23:38.646256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:59.632 [2024-11-29 10:23:38.646266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.646335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.632 [2024-11-29 10:23:38.646358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:59.632 [2024-11-29 10:23:38.646366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:59.632 [2024-11-29 10:23:38.646376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.632 [2024-11-29 10:23:38.647288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2841.184 ms, result 0 00:17:59.632 { 00:17:59.632 "name": "ftl0", 00:17:59.632 "uuid": "8e37a78e-ba99-4169-b584-26e962482e17" 00:17:59.632 } 00:17:59.632 10:23:38 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:59.632 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:59.632 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.632 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:59.632 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.633 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.633 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:59.633 10:23:38 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:59.633 [ 00:17:59.633 { 00:17:59.633 "name": "ftl0", 00:17:59.633 "aliases": [ 00:17:59.633 "8e37a78e-ba99-4169-b584-26e962482e17" 00:17:59.633 ], 00:17:59.633 "product_name": "FTL disk", 00:17:59.633 
"block_size": 4096, 00:17:59.633 "num_blocks": 20971520, 00:17:59.633 "uuid": "8e37a78e-ba99-4169-b584-26e962482e17", 00:17:59.633 "assigned_rate_limits": { 00:17:59.633 "rw_ios_per_sec": 0, 00:17:59.633 "rw_mbytes_per_sec": 0, 00:17:59.633 "r_mbytes_per_sec": 0, 00:17:59.633 "w_mbytes_per_sec": 0 00:17:59.633 }, 00:17:59.633 "claimed": false, 00:17:59.633 "zoned": false, 00:17:59.633 "supported_io_types": { 00:17:59.633 "read": true, 00:17:59.633 "write": true, 00:17:59.633 "unmap": true, 00:17:59.633 "flush": true, 00:17:59.633 "reset": false, 00:17:59.633 "nvme_admin": false, 00:17:59.633 "nvme_io": false, 00:17:59.633 "nvme_io_md": false, 00:17:59.633 "write_zeroes": true, 00:17:59.633 "zcopy": false, 00:17:59.633 "get_zone_info": false, 00:17:59.633 "zone_management": false, 00:17:59.633 "zone_append": false, 00:17:59.633 "compare": false, 00:17:59.633 "compare_and_write": false, 00:17:59.633 "abort": false, 00:17:59.633 "seek_hole": false, 00:17:59.633 "seek_data": false, 00:17:59.633 "copy": false, 00:17:59.633 "nvme_iov_md": false 00:17:59.633 }, 00:17:59.633 "driver_specific": { 00:17:59.633 "ftl": { 00:17:59.633 "base_bdev": "8cd5c74c-a11c-4b5b-92cc-642540262b24", 00:17:59.633 "cache": "nvc0n1p0" 00:17:59.633 } 00:17:59.633 } 00:17:59.633 } 00:17:59.633 ] 00:17:59.633 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:59.633 10:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:59.633 10:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.891 10:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:59.891 10:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:00.151 [2024-11-29 10:23:39.465992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.466030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:00.151 [2024-11-29 10:23:39.466043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.151 [2024-11-29 10:23:39.466052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.466104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.151 [2024-11-29 10:23:39.466572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.466604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:00.151 [2024-11-29 10:23:39.466613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:18:00.151 [2024-11-29 10:23:39.466622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.467153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.467177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:00.151 [2024-11-29 10:23:39.467187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:18:00.151 [2024-11-29 10:23:39.467197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.470444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.470469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:00.151 [2024-11-29 
10:23:39.470488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:18:00.151 [2024-11-29 10:23:39.470499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.476609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.476637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:00.151 [2024-11-29 10:23:39.476647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.084 ms 00:18:00.151 [2024-11-29 10:23:39.476667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.478004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.478041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:00.151 [2024-11-29 10:23:39.478050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:18:00.151 [2024-11-29 10:23:39.478059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.482206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.482244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:00.151 [2024-11-29 10:23:39.482253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.104 ms 00:18:00.151 [2024-11-29 10:23:39.482263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.482441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.482458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:00.151 [2024-11-29 10:23:39.482466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:18:00.151 [2024-11-29 10:23:39.482475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.483609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.483644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:00.151 [2024-11-29 10:23:39.483653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:18:00.151 [2024-11-29 10:23:39.483661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.484779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.484821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:00.151 [2024-11-29 10:23:39.484831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:18:00.151 [2024-11-29 10:23:39.484839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.485704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.485738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:00.151 [2024-11-29 10:23:39.485747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:18:00.151 [2024-11-29 10:23:39.485756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.486579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.151 [2024-11-29 10:23:39.486614] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:00.151 [2024-11-29 10:23:39.486623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:18:00.151 [2024-11-29 10:23:39.486631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.151 [2024-11-29 10:23:39.486673] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:00.151 [2024-11-29 10:23:39.486688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:00.151 [2024-11-29 10:23:39.486708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 
10:23:39.486913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.486999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:00.152 [2024-11-29 10:23:39.487136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:00.152 [2024-11-29 10:23:39.487502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:00.153 [2024-11-29 10:23:39.487592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:00.153 [2024-11-29 10:23:39.487606] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e37a78e-ba99-4169-b584-26e962482e17 00:18:00.153 [2024-11-29 10:23:39.487616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:00.153 [2024-11-29 10:23:39.487624] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:00.153 [2024-11-29 10:23:39.487632] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:00.153 [2024-11-29 10:23:39.487640] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:00.153 [2024-11-29 10:23:39.487648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:00.153 [2024-11-29 10:23:39.487655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:00.153 [2024-11-29 10:23:39.487663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:00.153 [2024-11-29 10:23:39.487669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:00.153 [2024-11-29 10:23:39.487677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:00.153 [2024-11-29 10:23:39.487684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.153 [2024-11-29 10:23:39.487692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:00.153 [2024-11-29 10:23:39.487700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:18:00.153 [2024-11-29 10:23:39.487709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.489222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.153 [2024-11-29 10:23:39.489249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:00.153 [2024-11-29 10:23:39.489258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:18:00.153 [2024-11-29 10:23:39.489267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.489362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.153 [2024-11-29 10:23:39.489380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:00.153 [2024-11-29 10:23:39.489391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:00.153 [2024-11-29 10:23:39.489400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.494754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.494791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.153 [2024-11-29 10:23:39.494812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.494823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 
[2024-11-29 10:23:39.494884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.494904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.153 [2024-11-29 10:23:39.494915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.494924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.494985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.495007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.153 [2024-11-29 10:23:39.495015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.495024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.495048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.495058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.153 [2024-11-29 10:23:39.495065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.495074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.504634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.504676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.153 [2024-11-29 10:23:39.504687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.504707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.512605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.512646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.153 [2024-11-29 10:23:39.512669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.512682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.512749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.512763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.153 [2024-11-29 10:23:39.512771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.512790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.512857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.512869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.153 [2024-11-29 10:23:39.512877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.512886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.512970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.512981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.153 [2024-11-29 10:23:39.512989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.512997] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.513047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.513059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.153 [2024-11-29 10:23:39.513067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.513076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.513121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.513134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.153 [2024-11-29 10:23:39.513141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.513150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.513205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.153 [2024-11-29 10:23:39.513216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.153 [2024-11-29 10:23:39.513225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.153 [2024-11-29 10:23:39.513235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.153 [2024-11-29 10:23:39.513406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.370 ms, result 0 00:18:00.153 true 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86112 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86112 ']' 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86112 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86112 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.153 killing process with pid 86112 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86112' 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86112 00:18:00.153 10:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86112 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:06.713 10:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.713 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:06.713 fio-3.35 00:18:06.713 Starting 1 thread 00:18:10.901 00:18:10.901 test: (groupid=0, jobs=1): err= 0: pid=86281: Fri Nov 29 10:23:49 2024 00:18:10.901 read: IOPS=1084, BW=72.0MiB/s (75.5MB/s)(255MiB/3534msec) 00:18:10.901 slat (nsec): min=3081, max=28851, avg=4538.22, stdev=1888.83 00:18:10.901 clat (usec): min=254, max=1211, avg=412.95, stdev=149.93 00:18:10.901 lat (usec): min=258, max=1216, avg=417.49, stdev=150.48 00:18:10.901 clat percentiles (usec): 00:18:10.901 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:18:10.901 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 396], 00:18:10.901 | 70.00th=[ 449], 80.00th=[ 523], 90.00th=[ 619], 95.00th=[ 775], 00:18:10.901 | 99.00th=[ 857], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 1074], 00:18:10.901 | 99.99th=[ 1205] 00:18:10.901 write: IOPS=1092, BW=72.6MiB/s (76.1MB/s)(256MiB/3528msec); 0 zone resets 00:18:10.901 slat (usec): min=13, max=111, avg=18.08, stdev= 3.52 00:18:10.901 clat (usec): min=276, max=1362, avg=470.15, stdev=169.46 00:18:10.901 lat (usec): min=297, max=1381, avg=488.23, stdev=170.25 00:18:10.901 clat percentiles (usec): 00:18:10.901 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 330], 20.00th=[ 334], 00:18:10.901 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 392], 60.00th=[ 457], 00:18:10.901 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 717], 95.00th=[ 840], 00:18:10.901 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1123], 99.95th=[ 1172], 00:18:10.901 | 99.99th=[ 1369] 00:18:10.901 bw ( KiB/s): min=52904, max=91528, per=100.00%, avg=74430.86, stdev=16696.04, samples=7 00:18:10.901 iops : min= 778, max= 1346, avg=1094.57, stdev=245.53, samples=7 00:18:10.901 lat (usec) : 500=70.61%, 750=22.06%, 1000=7.04% 
00:18:10.901 lat (msec) : 2=0.30% 00:18:10.901 cpu : usr=99.32%, sys=0.08%, ctx=7, majf=0, minf=1326 00:18:10.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.901 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.901 00:18:10.901 Run status group 0 (all jobs): 00:18:10.901 READ: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=255MiB (267MB), run=3534-3534msec 00:18:10.901 WRITE: bw=72.6MiB/s (76.1MB/s), 72.6MiB/s-72.6MiB/s (76.1MB/s-76.1MB/s), io=256MiB (269MB), run=3528-3528msec 00:18:10.901 ----------------------------------------------------- 00:18:10.901 Suppressions used: 00:18:10.901 count bytes template 00:18:10.901 1 5 /usr/src/fio/parse.c 00:18:10.901 1 8 libtcmalloc_minimal.so 00:18:10.901 1 904 libcrypto.so 00:18:10.901 ----------------------------------------------------- 00:18:10.901 00:18:10.901 10:23:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:10.901 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.901 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.902 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:11.163 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:11.163 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:11.163 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:11.163 10:23:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:11.163 10:23:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:11.163 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:11.163 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:11.163 fio-3.35 00:18:11.163 Starting 2 threads 00:18:43.311 00:18:43.311 first_half: (groupid=0, jobs=1): err= 0: pid=86373: Fri Nov 29 10:24:20 2024 00:18:43.311 read: IOPS=2288, BW=9154KiB/s (9374kB/s)(255MiB/28538msec) 00:18:43.311 slat (usec): min=3, max=536, avg= 6.15, stdev= 3.29 00:18:43.311 clat (usec): min=722, max=400704, avg=43148.48, stdev=32260.87 00:18:43.311 lat (usec): min=729, max=400709, avg=43154.63, stdev=32261.21 00:18:43.311 clat percentiles (msec): 00:18:43.311 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:18:43.311 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:18:43.311 | 70.00th=[ 40], 80.00th=[ 44], 90.00th=[ 53], 95.00th=[ 82], 00:18:43.311 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 359], 99.95th=[ 376], 00:18:43.311 | 99.99th=[ 393] 00:18:43.311 write: IOPS=2675, BW=10.5MiB/s (11.0MB/s)(256MiB/24492msec); 0 zone resets 00:18:43.311 slat (usec): min=4, max=3679, avg= 7.98, stdev=24.08 00:18:43.311 clat (usec): min=346, max=124646, avg=12713.04, stdev=19106.79 00:18:43.311 lat (usec): min=357, max=124651, avg=12721.01, stdev=19106.88 00:18:43.311 clat percentiles (usec): 00:18:43.311 | 1.00th=[ 799], 5.00th=[ 1037], 10.00th=[ 1483], 20.00th=[ 2409], 00:18:43.311 | 30.00th=[ 4555], 40.00th=[ 5932], 50.00th=[ 7635], 60.00th=[ 9241], 00:18:43.311 | 70.00th=[ 11338], 80.00th=[ 14353], 90.00th=[ 21365], 95.00th=[ 54789], 00:18:43.311 | 99.00th=[103285], 99.50th=[107480], 99.90th=[116917], 99.95th=[117965], 00:18:43.311 | 99.99th=[123208] 00:18:43.311 bw ( KiB/s): min= 1992, max=40016, per=100.00%, avg=21845.33, stdev=9614.19, samples=24 00:18:43.311 iops : min= 498, max=10004, avg=5461.33, stdev=2403.55, samples=24 00:18:43.311 lat (usec) : 500=0.01%, 750=0.27%, 1000=1.95% 00:18:43.311 lat (msec) : 2=6.08%, 4=5.55%, 10=18.40%, 20=12.81%, 50=46.18% 00:18:43.311 lat (msec) : 100=6.19%, 250=2.26%, 500=0.31% 00:18:43.311 cpu : usr=98.15%, sys=0.41%, ctx=389, majf=0, minf=5527 00:18:43.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:43.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.311 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.311 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.311 second_half: (groupid=0, jobs=1): err= 0: pid=86374: Fri Nov 29 10:24:20 2024 00:18:43.311 read: IOPS=2273, BW=9092KiB/s (9310kB/s)(255MiB/28734msec) 00:18:43.311 slat (usec): min=3, max=133, avg= 5.22, stdev= 2.29 00:18:43.311 clat (usec): min=624, max=405709, avg=42156.66, stdev=34403.30 00:18:43.311 lat (usec): min=629, max=405713, avg=42161.88, stdev=34403.71 00:18:43.311 clat percentiles (msec): 00:18:43.311 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 32], 00:18:43.311 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 37], 00:18:43.311 | 70.00th=[ 39], 80.00th=[ 43], 90.00th=[ 
51], 95.00th=[ 73], 00:18:43.311 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 351], 99.95th=[ 355], 00:18:43.311 | 99.99th=[ 405] 00:18:43.311 write: IOPS=2474, BW=9897KiB/s (10.1MB/s)(256MiB/26486msec); 0 zone resets 00:18:43.311 slat (usec): min=3, max=2105, avg= 7.83, stdev=16.39 00:18:43.311 clat (usec): min=423, max=126446, avg=14086.27, stdev=20971.75 00:18:43.311 lat (usec): min=430, max=126453, avg=14094.09, stdev=20971.98 00:18:43.311 clat percentiles (usec): 00:18:43.311 | 1.00th=[ 791], 5.00th=[ 1057], 10.00th=[ 1401], 20.00th=[ 1991], 00:18:43.311 | 30.00th=[ 2769], 40.00th=[ 4948], 50.00th=[ 7373], 60.00th=[ 9503], 00:18:43.311 | 70.00th=[ 11994], 80.00th=[ 16712], 90.00th=[ 37487], 95.00th=[ 60031], 00:18:43.311 | 99.00th=[104334], 99.50th=[108528], 99.90th=[117965], 99.95th=[122160], 00:18:43.311 | 99.99th=[125305] 00:18:43.311 bw ( KiB/s): min= 896, max=47072, per=94.60%, avg=18726.29, stdev=11071.28, samples=28 00:18:43.311 iops : min= 224, max=11768, avg=4681.57, stdev=2767.82, samples=28 00:18:43.311 lat (usec) : 500=0.01%, 750=0.33%, 1000=1.77% 00:18:43.311 lat (msec) : 2=8.01%, 4=8.21%, 10=13.57%, 20=12.37%, 50=47.05% 00:18:43.311 lat (msec) : 100=6.04%, 250=2.21%, 500=0.43% 00:18:43.311 cpu : usr=99.22%, sys=0.16%, ctx=49, majf=0, minf=5605 00:18:43.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:43.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.311 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.311 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.311 00:18:43.311 Run status group 0 (all jobs): 00:18:43.311 READ: bw=17.8MiB/s (18.6MB/s), 9092KiB/s-9154KiB/s (9310kB/s-9374kB/s), io=510MiB (535MB), run=28538-28734msec 00:18:43.311 WRITE: bw=19.3MiB/s (20.3MB/s), 9897KiB/s-10.5MiB/s (10.1MB/s-11.0MB/s), io=512MiB (537MB), run=24492-26486msec 00:18:43.311 ----------------------------------------------------- 00:18:43.311 Suppressions used: 00:18:43.311 count bytes template 00:18:43.311 2 10 /usr/src/fio/parse.c 00:18:43.311 4 384 /usr/src/fio/iolog.c 00:18:43.311 1 8 libtcmalloc_minimal.so 00:18:43.311 1 904 libcrypto.so 00:18:43.311 ----------------------------------------------------- 00:18:43.311 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.311 10:24:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:43.311 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:43.311 fio-3.35 00:18:43.311 Starting 1 thread 00:19:01.436 00:19:01.436 test: (groupid=0, jobs=1): err= 0: pid=86725: Fri Nov 29 10:24:38 2024 00:19:01.436 read: IOPS=6539, BW=25.5MiB/s (26.8MB/s)(255MiB/9971msec) 00:19:01.436 slat (nsec): min=3049, max=46002, avg=5273.94, stdev=1769.52 00:19:01.436 clat (usec): min=694, max=36835, avg=19564.59, stdev=2729.27 00:19:01.436 lat (usec): min=700, max=36839, avg=19569.86, stdev=2729.42 00:19:01.436 clat percentiles (usec): 00:19:01.436 | 1.00th=[15139], 5.00th=[15533], 10.00th=[16057], 20.00th=[17433], 00:19:01.436 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[19792], 00:19:01.436 | 70.00th=[20579], 80.00th=[21103], 90.00th=[22676], 95.00th=[24511], 00:19:01.436 | 99.00th=[28705], 99.50th=[30540], 99.90th=[32637], 99.95th=[33424], 00:19:01.436 | 99.99th=[35914] 00:19:01.436 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(256MiB/6200msec); 0 zone resets 00:19:01.436 slat (usec): min=3, max=354, avg= 6.95, stdev= 4.05 00:19:01.436 clat (usec): min=542, max=65230, avg=12053.20, stdev=15487.26 00:19:01.436 lat (usec): min=548, max=65238, avg=12060.15, stdev=15487.34 00:19:01.436 clat percentiles (usec): 00:19:01.436 | 1.00th=[ 930], 5.00th=[ 1237], 10.00th=[ 1434], 20.00th=[ 1745], 00:19:01.436 | 30.00th=[ 2114], 40.00th=[ 2999], 50.00th=[ 7046], 60.00th=[ 8291], 00:19:01.436 | 70.00th=[ 9896], 80.00th=[14222], 90.00th=[42730], 95.00th=[49546], 00:19:01.436 | 99.00th=[57934], 99.50th=[60031], 99.90th=[62129], 99.95th=[62653], 00:19:01.436 | 99.99th=[64226] 00:19:01.436 bw ( KiB/s): min=15360, max=67952, per=95.38%, avg=40329.85, stdev=13663.04, samples=13 00:19:01.436 iops : min= 3840, max=16988, avg=10082.46, stdev=3415.76, samples=13 00:19:01.436 lat (usec) : 750=0.12%, 1000=0.69% 00:19:01.436 lat (msec) : 2=12.70%, 4=7.36%, 10=14.67%, 20=37.39%, 50=24.73% 00:19:01.436 lat (msec) : 100=2.33% 00:19:01.436 cpu : 
usr=98.96%, sys=0.20%, ctx=34, majf=0, minf=5577 00:19:01.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:01.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.436 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.436 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.436 00:19:01.436 Run status group 0 (all jobs): 00:19:01.436 READ: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=255MiB (267MB), run=9971-9971msec 00:19:01.436 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=256MiB (268MB), run=6200-6200msec 00:19:01.436 ----------------------------------------------------- 00:19:01.436 Suppressions used: 00:19:01.436 count bytes template 00:19:01.436 1 5 /usr/src/fio/parse.c 00:19:01.436 2 192 /usr/src/fio/iolog.c 00:19:01.436 1 8 libtcmalloc_minimal.so 00:19:01.436 1 904 libcrypto.so 00:19:01.436 ----------------------------------------------------- 00:19:01.436 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:01.436 Remove shared memory files 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69103 /dev/shm/spdk_tgt_trace.pid85056 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:01.436 00:19:01.436 real 1m7.334s 00:19:01.436 user 2m35.010s 00:19:01.436 sys 0m3.104s 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.436 ************************************ 00:19:01.436 END TEST ftl_fio_basic 00:19:01.436 ************************************ 00:19:01.436 10:24:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:01.436 10:24:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:01.436 10:24:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:01.436 10:24:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.436 10:24:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:01.436 ************************************ 00:19:01.436 START TEST ftl_bdevperf 00:19:01.436 ************************************ 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:01.436 * Looking for test storage... 
00:19:01.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.436 --rc genhtml_branch_coverage=1 00:19:01.436 --rc genhtml_function_coverage=1 00:19:01.436 --rc genhtml_legend=1 00:19:01.436 --rc geninfo_all_blocks=1 00:19:01.436 --rc geninfo_unexecuted_blocks=1 00:19:01.436 00:19:01.436 ' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.436 --rc genhtml_branch_coverage=1 00:19:01.436 
--rc genhtml_function_coverage=1 00:19:01.436 --rc genhtml_legend=1 00:19:01.436 --rc geninfo_all_blocks=1 00:19:01.436 --rc geninfo_unexecuted_blocks=1 00:19:01.436 00:19:01.436 ' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.436 --rc genhtml_branch_coverage=1 00:19:01.436 --rc genhtml_function_coverage=1 00:19:01.436 --rc genhtml_legend=1 00:19:01.436 --rc geninfo_all_blocks=1 00:19:01.436 --rc geninfo_unexecuted_blocks=1 00:19:01.436 00:19:01.436 ' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.436 --rc genhtml_branch_coverage=1 00:19:01.436 --rc genhtml_function_coverage=1 00:19:01.436 --rc genhtml_legend=1 00:19:01.436 --rc geninfo_all_blocks=1 00:19:01.436 --rc geninfo_unexecuted_blocks=1 00:19:01.436 00:19:01.436 ' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:01.436 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86985 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86985 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 86985 ']' 00:19:01.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.437 10:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:01.437 [2024-11-29 10:24:39.616777] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:19:01.437 [2024-11-29 10:24:39.616914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86985 ] 00:19:01.437 [2024-11-29 10:24:39.760401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.437 [2024-11-29 10:24:39.788928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:01.437 10:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:01.699 { 00:19:01.699 "name": "nvme0n1", 00:19:01.699 "aliases": [ 00:19:01.699 "33950651-4b53-401c-99b4-7caf18c2ff0a" 00:19:01.699 ], 00:19:01.699 "product_name": "NVMe disk", 00:19:01.699 "block_size": 4096, 00:19:01.699 "num_blocks": 1310720, 00:19:01.699 "uuid": "33950651-4b53-401c-99b4-7caf18c2ff0a", 00:19:01.699 "numa_id": -1, 00:19:01.699 "assigned_rate_limits": { 00:19:01.699 "rw_ios_per_sec": 0, 00:19:01.699 "rw_mbytes_per_sec": 0, 00:19:01.699 "r_mbytes_per_sec": 0, 00:19:01.699 "w_mbytes_per_sec": 0 00:19:01.699 }, 00:19:01.699 "claimed": true, 00:19:01.699 "claim_type": "read_many_write_one", 00:19:01.699 "zoned": false, 00:19:01.699 "supported_io_types": { 00:19:01.699 "read": true, 00:19:01.699 "write": true, 00:19:01.699 "unmap": true, 00:19:01.699 "flush": true, 00:19:01.699 "reset": true, 00:19:01.699 "nvme_admin": true, 00:19:01.699 "nvme_io": true, 00:19:01.699 "nvme_io_md": false, 00:19:01.699 "write_zeroes": true, 00:19:01.699 "zcopy": false, 00:19:01.699 "get_zone_info": false, 00:19:01.699 "zone_management": false, 00:19:01.699 "zone_append": false, 00:19:01.699 "compare": true, 00:19:01.699 "compare_and_write": false, 00:19:01.699 "abort": true, 00:19:01.699 "seek_hole": false, 00:19:01.699 "seek_data": false, 00:19:01.699 "copy": true, 00:19:01.699 "nvme_iov_md": false 00:19:01.699 }, 00:19:01.699 "driver_specific": { 00:19:01.699 
"nvme": [ 00:19:01.699 { 00:19:01.699 "pci_address": "0000:00:11.0", 00:19:01.699 "trid": { 00:19:01.699 "trtype": "PCIe", 00:19:01.699 "traddr": "0000:00:11.0" 00:19:01.699 }, 00:19:01.699 "ctrlr_data": { 00:19:01.699 "cntlid": 0, 00:19:01.699 "vendor_id": "0x1b36", 00:19:01.699 "model_number": "QEMU NVMe Ctrl", 00:19:01.699 "serial_number": "12341", 00:19:01.699 "firmware_revision": "8.0.0", 00:19:01.699 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:01.699 "oacs": { 00:19:01.699 "security": 0, 00:19:01.699 "format": 1, 00:19:01.699 "firmware": 0, 00:19:01.699 "ns_manage": 1 00:19:01.699 }, 00:19:01.699 "multi_ctrlr": false, 00:19:01.699 "ana_reporting": false 00:19:01.699 }, 00:19:01.699 "vs": { 00:19:01.699 "nvme_version": "1.4" 00:19:01.699 }, 00:19:01.699 "ns_data": { 00:19:01.699 "id": 1, 00:19:01.699 "can_share": false 00:19:01.699 } 00:19:01.699 } 00:19:01.699 ], 00:19:01.699 "mp_policy": "active_passive" 00:19:01.699 } 00:19:01.699 } 00:19:01.699 ]' 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:01.699 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:01.961 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=6db3a611-303b-4eed-be5a-3ce07c53e999 00:19:01.961 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:01.961 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6db3a611-303b-4eed-be5a-3ce07c53e999 00:19:02.222 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:02.222 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=be82143b-3908-478d-a53e-48c81611faa8 00:19:02.222 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u be82143b-3908-478d-a53e-48c81611faa8 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:02.483 10:24:41 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:02.483 10:24:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:02.759 { 00:19:02.759 "name": "c31d64ea-aae2-40a8-b95e-86436bf701b7", 00:19:02.759 "aliases": [ 00:19:02.759 "lvs/nvme0n1p0" 00:19:02.759 ], 00:19:02.759 "product_name": "Logical Volume", 00:19:02.759 "block_size": 4096, 00:19:02.759 "num_blocks": 26476544, 00:19:02.759 "uuid": "c31d64ea-aae2-40a8-b95e-86436bf701b7", 00:19:02.759 "assigned_rate_limits": { 00:19:02.759 "rw_ios_per_sec": 0, 00:19:02.759 "rw_mbytes_per_sec": 0, 00:19:02.759 "r_mbytes_per_sec": 0, 00:19:02.759 "w_mbytes_per_sec": 0 00:19:02.759 }, 00:19:02.759 "claimed": false, 00:19:02.759 "zoned": false, 00:19:02.759 "supported_io_types": { 00:19:02.759 "read": true, 00:19:02.759 "write": true, 00:19:02.759 "unmap": true, 00:19:02.759 "flush": false, 00:19:02.759 "reset": true, 00:19:02.759 "nvme_admin": false, 00:19:02.759 "nvme_io": false, 00:19:02.759 "nvme_io_md": false, 00:19:02.759 "write_zeroes": true, 00:19:02.759 "zcopy": false, 00:19:02.759 "get_zone_info": false, 00:19:02.759 "zone_management": false, 00:19:02.759 "zone_append": false, 00:19:02.759 "compare": false, 00:19:02.759 "compare_and_write": false, 00:19:02.759 "abort": false, 00:19:02.759 "seek_hole": true, 00:19:02.759 "seek_data": true, 00:19:02.759 "copy": false, 00:19:02.759 "nvme_iov_md": false 00:19:02.759 }, 00:19:02.759 "driver_specific": { 00:19:02.759 "lvol": { 00:19:02.759 "lvol_store_uuid": "be82143b-3908-478d-a53e-48c81611faa8", 00:19:02.759 "base_bdev": "nvme0n1", 00:19:02.759 "thin_provision": true, 00:19:02.759 "num_allocated_clusters": 0, 00:19:02.759 "snapshot": false, 00:19:02.759 "clone": false, 00:19:02.759 "esnap_clone": false 00:19:02.759 } 00:19:02.759 } 00:19:02.759 } 00:19:02.759 ]' 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:02.759 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:02.760 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:02.760 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:03.032 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:03.294 { 00:19:03.294 "name": "c31d64ea-aae2-40a8-b95e-86436bf701b7", 00:19:03.294 "aliases": [ 00:19:03.294 "lvs/nvme0n1p0" 00:19:03.294 ], 00:19:03.294 "product_name": "Logical Volume", 00:19:03.294 "block_size": 4096, 00:19:03.294 "num_blocks": 26476544, 00:19:03.294 "uuid": "c31d64ea-aae2-40a8-b95e-86436bf701b7", 00:19:03.294 "assigned_rate_limits": { 00:19:03.294 "rw_ios_per_sec": 0, 00:19:03.294 "rw_mbytes_per_sec": 0, 00:19:03.294 "r_mbytes_per_sec": 0, 00:19:03.294 "w_mbytes_per_sec": 0 00:19:03.294 }, 00:19:03.294 "claimed": false, 00:19:03.294 "zoned": false, 00:19:03.294 "supported_io_types": { 00:19:03.294 "read": true, 00:19:03.294 "write": true, 00:19:03.294 "unmap": true, 00:19:03.294 "flush": false, 00:19:03.294 "reset": true, 00:19:03.294 "nvme_admin": false, 00:19:03.294 "nvme_io": false, 00:19:03.294 "nvme_io_md": false, 00:19:03.294 "write_zeroes": true, 00:19:03.294 "zcopy": false, 00:19:03.294 "get_zone_info": false, 00:19:03.294 "zone_management": false, 00:19:03.294 "zone_append": false, 00:19:03.294 "compare": false, 00:19:03.294 "compare_and_write": false, 00:19:03.294 "abort": false, 00:19:03.294 "seek_hole": true, 00:19:03.294 "seek_data": true, 00:19:03.294 "copy": false, 00:19:03.294 "nvme_iov_md": false 00:19:03.294 }, 00:19:03.294 "driver_specific": { 00:19:03.294 "lvol": { 00:19:03.294 "lvol_store_uuid": "be82143b-3908-478d-a53e-48c81611faa8", 00:19:03.294 "base_bdev": "nvme0n1", 00:19:03.294 "thin_provision": true, 00:19:03.294 "num_allocated_clusters": 0, 00:19:03.294 "snapshot": false, 00:19:03.294 "clone": false, 00:19:03.294 "esnap_clone": false 00:19:03.294 } 00:19:03.294 } 00:19:03.294 } 00:19:03.294 ]' 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:03.294 10:24:42 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:03.556 10:24:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c31d64ea-aae2-40a8-b95e-86436bf701b7 00:19:03.817 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:03.817 { 00:19:03.817 "name": "c31d64ea-aae2-40a8-b95e-86436bf701b7", 00:19:03.817 "aliases": [ 00:19:03.817 "lvs/nvme0n1p0" 00:19:03.817 ], 00:19:03.817 "product_name": "Logical Volume", 00:19:03.817 "block_size": 4096, 00:19:03.817 "num_blocks": 26476544, 00:19:03.817 "uuid": "c31d64ea-aae2-40a8-b95e-86436bf701b7", 00:19:03.817 "assigned_rate_limits": { 00:19:03.817 "rw_ios_per_sec": 0, 00:19:03.817 "rw_mbytes_per_sec": 0, 00:19:03.817 "r_mbytes_per_sec": 0, 00:19:03.817 "w_mbytes_per_sec": 0 00:19:03.817 }, 00:19:03.817 "claimed": false, 00:19:03.817 "zoned": false, 00:19:03.817 "supported_io_types": { 00:19:03.817 "read": true, 00:19:03.817 "write": true, 00:19:03.817 "unmap": true, 00:19:03.817 "flush": false, 00:19:03.817 "reset": true, 00:19:03.817 "nvme_admin": false, 00:19:03.817 "nvme_io": false, 00:19:03.817 "nvme_io_md": false, 00:19:03.817 "write_zeroes": true, 00:19:03.817 "zcopy": false, 00:19:03.817 "get_zone_info": false, 00:19:03.817 "zone_management": false, 00:19:03.817 "zone_append": false, 00:19:03.817 "compare": false, 00:19:03.817 "compare_and_write": false, 00:19:03.817 "abort": false, 00:19:03.817 "seek_hole": true, 00:19:03.817 "seek_data": true, 00:19:03.817 "copy": false, 00:19:03.817 "nvme_iov_md": false 00:19:03.817 }, 00:19:03.817 "driver_specific": { 00:19:03.817 "lvol": { 00:19:03.817 "lvol_store_uuid": "be82143b-3908-478d-a53e-48c81611faa8", 00:19:03.817 "base_bdev": "nvme0n1", 00:19:03.817 "thin_provision": true, 00:19:03.817 "num_allocated_clusters": 0, 00:19:03.817 "snapshot": false, 00:19:03.817 "clone": false, 00:19:03.817 "esnap_clone": false 00:19:03.817 } 00:19:03.817 } 00:19:03.817 } 00:19:03.817 ]' 00:19:03.817 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:03.818 10:24:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c31d64ea-aae2-40a8-b95e-86436bf701b7 -c nvc0n1p0 --l2p_dram_limit 20 00:19:04.080 [2024-11-29 10:24:43.298518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.298569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.080 [2024-11-29 10:24:43.298586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.080 [2024-11-29 10:24:43.298598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.298654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.298664] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.080 [2024-11-29 10:24:43.298677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:04.080 [2024-11-29 10:24:43.298685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.298707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.080 [2024-11-29 10:24:43.299008] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.080 [2024-11-29 10:24:43.299028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.299039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.080 [2024-11-29 10:24:43.299052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:04.080 [2024-11-29 10:24:43.299061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.299101] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b17e91d5-a0e1-46ad-827e-6af5176f8902 00:19:04.080 [2024-11-29 10:24:43.300835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.300880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:04.080 [2024-11-29 10:24:43.300891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:04.080 [2024-11-29 10:24:43.300905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.308483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.308521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.080 [2024-11-29 10:24:43.308532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.509 ms 00:19:04.080 [2024-11-29 10:24:43.308550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.308629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.308641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.080 [2024-11-29 10:24:43.308652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:04.080 [2024-11-29 10:24:43.308662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.308722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.308733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.080 [2024-11-29 10:24:43.308741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:04.080 [2024-11-29 10:24:43.308751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.308772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.080 [2024-11-29 10:24:43.310718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.310753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.080 [2024-11-29 10:24:43.310768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.950 ms 00:19:04.080 [2024-11-29 10:24:43.310776] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.310826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.310836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.080 [2024-11-29 10:24:43.310849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:04.080 [2024-11-29 10:24:43.310858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.310875] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:04.080 [2024-11-29 10:24:43.311025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.080 [2024-11-29 10:24:43.311039] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.080 [2024-11-29 10:24:43.311051] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.080 [2024-11-29 10:24:43.311076] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.080 [2024-11-29 10:24:43.311087] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.080 [2024-11-29 10:24:43.311100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:04.080 [2024-11-29 10:24:43.311108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.080 [2024-11-29 10:24:43.311118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.080 [2024-11-29 10:24:43.311131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.080 [2024-11-29 10:24:43.311142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.311150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.080 [2024-11-29 10:24:43.311164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:04.080 [2024-11-29 10:24:43.311172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.311257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-29 10:24:43.311267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.080 [2024-11-29 10:24:43.311276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:04.080 [2024-11-29 10:24:43.311283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-29 10:24:43.311375] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.080 [2024-11-29 10:24:43.311388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.080 [2024-11-29 10:24:43.311399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.080 [2024-11-29 10:24:43.311409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.080 [2024-11-29 10:24:43.311419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.080 [2024-11-29 10:24:43.311428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.080 [2024-11-29 10:24:43.311438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:04.080 
[2024-11-29 10:24:43.311446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.080 [2024-11-29 10:24:43.311458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:04.080 [2024-11-29 10:24:43.311466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.080 [2024-11-29 10:24:43.311475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.080 [2024-11-29 10:24:43.311485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:04.080 [2024-11-29 10:24:43.311496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.080 [2024-11-29 10:24:43.311504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.080 [2024-11-29 10:24:43.311515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:04.081 [2024-11-29 10:24:43.311524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.081 [2024-11-29 10:24:43.311544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.081 [2024-11-29 10:24:43.311572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.081 [2024-11-29 10:24:43.311598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.081 [2024-11-29 10:24:43.311626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.081 [2024-11-29 10:24:43.311654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.081 [2024-11-29 10:24:43.311681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.081 [2024-11-29 10:24:43.311699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.081 [2024-11-29 10:24:43.311707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:04.081 [2024-11-29 10:24:43.311716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.081 [2024-11-29 10:24:43.311724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.081 [2024-11-29 10:24:43.311732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:04.081 [2024-11-29 10:24:43.311740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.081 [2024-11-29 10:24:43.311758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:04.081 [2024-11-29 10:24:43.311767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.081 [2024-11-29 10:24:43.311786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.081 [2024-11-29 10:24:43.311813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-29 10:24:43.311835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.081 [2024-11-29 10:24:43.311844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.081 [2024-11-29 10:24:43.311851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.081 [2024-11-29 10:24:43.311860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.081 [2024-11-29 10:24:43.311868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.081 [2024-11-29 10:24:43.311877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.081 [2024-11-29 10:24:43.311887] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.081 [2024-11-29 10:24:43.311902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.311911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:04.081 [2024-11-29 10:24:43.311922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:04.081 [2024-11-29 10:24:43.311929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:04.081 [2024-11-29 10:24:43.311938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:04.081 [2024-11-29 10:24:43.311946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:04.081 [2024-11-29 10:24:43.311957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:04.081 [2024-11-29 10:24:43.311965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:04.081 [2024-11-29 10:24:43.311981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:04.081 [2024-11-29 10:24:43.311988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:04.081 [2024-11-29 10:24:43.311997] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.312004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.312013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.312020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.312029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:04.081 [2024-11-29 10:24:43.312036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.081 [2024-11-29 10:24:43.312052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.312061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.081 [2024-11-29 10:24:43.312070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.081 [2024-11-29 10:24:43.312077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.081 [2024-11-29 10:24:43.312085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.081 [2024-11-29 10:24:43.312093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-29 10:24:43.312110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.081 [2024-11-29 10:24:43.312117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:19:04.081 [2024-11-29 10:24:43.312126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-29 10:24:43.312158] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
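(Illustrative aside, not part of the test output.) The MiB figures printed by dump_region above can be recomputed from the hex block offsets and sizes in the superblock dump, assuming a 4 KiB FTL block size, which is what these numbers imply: for example the type:0x3 entry (blk_offs:0x5020, blk_sz:0x80) reproduces the band_md region line. The blk_to_mib helper below is invented for this sketch and is not part of the test scripts:

  # hypothetical helper: hex block count -> MiB, assuming 4 KiB FTL blocks
  blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
  blk_to_mib 0x5020   # 80.12, matches "Region band_md ... offset: 80.12 MiB"
  blk_to_mib 0x80     # .50, matches "blocks: 0.50 MiB"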
00:19:04.081 [2024-11-29 10:24:43.312174] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:08.291 [2024-11-29 10:24:47.045946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.046013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:08.291 [2024-11-29 10:24:47.046030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3733.707 ms 00:19:08.291 [2024-11-29 10:24:47.046042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.055561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.055612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:08.291 [2024-11-29 10:24:47.055624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.423 ms 00:19:08.291 [2024-11-29 10:24:47.055635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.055718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.055729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:08.291 [2024-11-29 10:24:47.055744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:08.291 [2024-11-29 10:24:47.055754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.073363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.073408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:08.291 [2024-11-29 10:24:47.073425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.564 ms 00:19:08.291 [2024-11-29 10:24:47.073434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.073466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.073479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:08.291 [2024-11-29 10:24:47.073487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:08.291 [2024-11-29 10:24:47.073497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.073884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.073906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:08.291 [2024-11-29 10:24:47.073915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:08.291 [2024-11-29 10:24:47.073926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.074034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.074047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:08.291 [2024-11-29 10:24:47.074059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:08.291 [2024-11-29 10:24:47.074069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.079890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.079926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:08.291 [2024-11-29 
10:24:47.079937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.804 ms 00:19:08.291 [2024-11-29 10:24:47.079949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.089456] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:08.291 [2024-11-29 10:24:47.094617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.094643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:08.291 [2024-11-29 10:24:47.094655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.597 ms 00:19:08.291 [2024-11-29 10:24:47.094663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.155165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.155206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:08.291 [2024-11-29 10:24:47.155221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.476 ms 00:19:08.291 [2024-11-29 10:24:47.155238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.155414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.155425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:08.291 [2024-11-29 10:24:47.155435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:08.291 [2024-11-29 10:24:47.155442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.159476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.159508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:08.291 [2024-11-29 10:24:47.159520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.990 ms 00:19:08.291 [2024-11-29 10:24:47.159527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.163204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.163235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:08.291 [2024-11-29 10:24:47.163246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:19:08.291 [2024-11-29 10:24:47.163254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.163553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.163565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:08.291 [2024-11-29 10:24:47.163577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:08.291 [2024-11-29 10:24:47.163590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.196449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.196483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:08.291 [2024-11-29 10:24:47.196496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.837 ms 00:19:08.291 [2024-11-29 10:24:47.196504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.201791] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.201836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:08.291 [2024-11-29 10:24:47.201848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:19:08.291 [2024-11-29 10:24:47.201856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.206071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.206122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:08.291 [2024-11-29 10:24:47.206134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:19:08.291 [2024-11-29 10:24:47.206141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.211256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.211289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:08.291 [2024-11-29 10:24:47.211303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:19:08.291 [2024-11-29 10:24:47.211310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.211349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.211362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:08.291 [2024-11-29 10:24:47.211373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:08.291 [2024-11-29 10:24:47.211380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.211446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.291 [2024-11-29 10:24:47.211456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:08.291 [2024-11-29 10:24:47.211466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:08.291 [2024-11-29 10:24:47.211474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.291 [2024-11-29 10:24:47.212432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3913.427 ms, result 0 00:19:08.291 { 00:19:08.291 "name": "ftl0", 00:19:08.291 "uuid": "b17e91d5-a0e1-46ad-827e-6af5176f8902" 00:19:08.291 } 00:19:08.291 10:24:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:08.291 10:24:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:08.291 10:24:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:08.292 10:24:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:08.292 [2024-11-29 10:24:47.548621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.292 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:08.292 Zero copy mechanism will not be used. 00:19:08.292 Running I/O for 4 seconds... 
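(Illustrative aside.) The zero-copy notice above is plain arithmetic: the perform_tests -o 69632 request is 68 KiB per I/O (17 blocks of 4096 bytes), which is above bdevperf's 65536-byte (64 KiB) zero-copy threshold, so the tool falls back to copied buffers exactly as logged:

  echo $(( 69632 / 4096 ))    # 17 blocks of 4 KiB per I/O
  echo $(( 69632 > 65536 ))   # 1, i.e. above the zero-copy threshold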
00:19:10.179 642.00 IOPS, 42.63 MiB/s [2024-11-29T10:24:50.581Z] 683.00 IOPS, 45.36 MiB/s [2024-11-29T10:24:51.965Z] 830.00 IOPS, 55.12 MiB/s [2024-11-29T10:24:51.965Z] 793.00 IOPS, 52.66 MiB/s 00:19:12.500 Latency(us) 00:19:12.500 [2024-11-29T10:24:51.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.500 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:12.500 ftl0 : 4.00 792.80 52.65 0.00 0.00 1330.70 169.35 3276.80 00:19:12.500 [2024-11-29T10:24:51.965Z] =================================================================================================================== 00:19:12.500 [2024-11-29T10:24:51.965Z] Total : 792.80 52.65 0.00 0.00 1330.70 169.35 3276.80 00:19:12.500 { 00:19:12.500 "results": [ 00:19:12.500 { 00:19:12.500 "job": "ftl0", 00:19:12.500 "core_mask": "0x1", 00:19:12.500 "workload": "randwrite", 00:19:12.500 "status": "finished", 00:19:12.500 "queue_depth": 1, 00:19:12.500 "io_size": 69632, 00:19:12.500 "runtime": 4.002254, 00:19:12.500 "iops": 792.8032553656014, 00:19:12.500 "mibps": 52.64709117662197, 00:19:12.500 "io_failed": 0, 00:19:12.500 "io_timeout": 0, 00:19:12.500 "avg_latency_us": 1330.7032548667846, 00:19:12.500 "min_latency_us": 169.35384615384615, 00:19:12.500 "max_latency_us": 3276.8 00:19:12.500 } 00:19:12.500 ], 00:19:12.500 "core_count": 1 00:19:12.500 } 00:19:12.500 [2024-11-29 10:24:51.556692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:12.500 10:24:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:12.500 [2024-11-29 10:24:51.661204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:12.500 Running I/O for 4 seconds... 
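(Illustrative aside.) The derived fields in the results JSON for the qd=1 run above cross-check cleanly: mibps is just iops * io_size / 2^20. Recomputing from the reported values (iops 792.8032553656014, io_size 69632):

  echo "scale=5; 792.8032553656014 * 69632 / 1048576" | bc
  # -> 52.64708..., matching "mibps": 52.64709117662197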
00:19:14.402 7837.00 IOPS, 30.61 MiB/s [2024-11-29T10:24:54.804Z] 6701.50 IOPS, 26.18 MiB/s [2024-11-29T10:24:55.745Z] 6325.33 IOPS, 24.71 MiB/s [2024-11-29T10:24:55.745Z] 5997.00 IOPS, 23.43 MiB/s 00:19:16.280 Latency(us) 00:19:16.280 [2024-11-29T10:24:55.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.280 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:16.280 ftl0 : 4.03 5986.57 23.39 0.00 0.00 21314.52 275.69 46782.62 00:19:16.280 [2024-11-29T10:24:55.745Z] =================================================================================================================== 00:19:16.280 [2024-11-29T10:24:55.745Z] Total : 5986.57 23.39 0.00 0.00 21314.52 0.00 46782.62 00:19:16.280 { 00:19:16.280 "results": [ 00:19:16.280 { 00:19:16.280 "job": "ftl0", 00:19:16.280 "core_mask": "0x1", 00:19:16.280 "workload": "randwrite", 00:19:16.280 "status": "finished", 00:19:16.280 "queue_depth": 128, 00:19:16.280 "io_size": 4096, 00:19:16.280 "runtime": 4.028349, 00:19:16.280 "iops": 5986.571669932273, 00:19:16.280 "mibps": 23.38504558567294, 00:19:16.280 "io_failed": 0, 00:19:16.280 "io_timeout": 0, 00:19:16.280 "avg_latency_us": 21314.524201742857, 00:19:16.280 "min_latency_us": 275.6923076923077, 00:19:16.280 "max_latency_us": 46782.621538461535 00:19:16.280 } 00:19:16.280 ], 00:19:16.280 "core_count": 1 00:19:16.280 } 00:19:16.280 [2024-11-29 10:24:55.695472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:16.280 10:24:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:16.541 [2024-11-29 10:24:55.813061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:16.541 Running I/O for 4 seconds... 
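(Illustrative aside.) The qd=128 randwrite run above is also roughly consistent with Little's law, IOPS ~ queue_depth / avg_latency: 128 outstanding I/Os at an average of 21314.52 us predict about 6005 IOPS against the measured 5986.57, with the small shortfall plausibly down to time spent outside the device path being counted in the runtime:

  echo "scale=2; 128 / 0.021314524201742857" | bc   # ~6005 predicted IOPS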
00:19:18.425 5255.00 IOPS, 20.53 MiB/s [2024-11-29T10:24:58.832Z] 5892.50 IOPS, 23.02 MiB/s [2024-11-29T10:25:00.217Z] 5705.33 IOPS, 22.29 MiB/s [2024-11-29T10:25:00.218Z] 5378.75 IOPS, 21.01 MiB/s 00:19:20.753 Latency(us) 00:19:20.753 [2024-11-29T10:25:00.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.753 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:20.753 Verification LBA range: start 0x0 length 0x1400000 00:19:20.753 ftl0 : 4.01 5392.34 21.06 0.00 0.00 23670.99 226.86 56865.08 00:19:20.753 [2024-11-29T10:25:00.218Z] =================================================================================================================== 00:19:20.753 [2024-11-29T10:25:00.218Z] Total : 5392.34 21.06 0.00 0.00 23670.99 0.00 56865.08 00:19:20.753 [2024-11-29 10:24:59.835263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:20.753 { 00:19:20.753 "results": [ 00:19:20.753 { 00:19:20.753 "job": "ftl0", 00:19:20.753 "core_mask": "0x1", 00:19:20.753 "workload": "verify", 00:19:20.753 "status": "finished", 00:19:20.753 "verify_range": { 00:19:20.753 "start": 0, 00:19:20.753 "length": 20971520 00:19:20.753 }, 00:19:20.753 "queue_depth": 128, 00:19:20.753 "io_size": 4096, 00:19:20.753 "runtime": 4.013659, 00:19:20.753 "iops": 5392.336518872181, 00:19:20.753 "mibps": 21.063814526844457, 00:19:20.753 "io_failed": 0, 00:19:20.753 "io_timeout": 0, 00:19:20.753 "avg_latency_us": 23670.991290131115, 00:19:20.753 "min_latency_us": 226.85538461538462, 00:19:20.753 "max_latency_us": 56865.083076923074 00:19:20.753 } 00:19:20.753 ], 00:19:20.753 "core_count": 1 00:19:20.753 } 00:19:20.753 10:24:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:20.753 [2024-11-29 10:25:00.039634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.753 [2024-11-29 10:25:00.039684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.753 [2024-11-29 10:25:00.039705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.753 [2024-11-29 10:25:00.039714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.753 [2024-11-29 10:25:00.039741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.753 [2024-11-29 10:25:00.040493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.753 [2024-11-29 10:25:00.040533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.753 [2024-11-29 10:25:00.040545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:19:20.753 [2024-11-29 10:25:00.040560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.753 [2024-11-29 10:25:00.043780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.753 [2024-11-29 10:25:00.043841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.753 [2024-11-29 10:25:00.043853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:19:20.753 [2024-11-29 10:25:00.043866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.270184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.270239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:21.016 [2024-11-29 10:25:00.270257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 226.298 ms 00:19:21.016 [2024-11-29 10:25:00.270268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.276501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.276541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:21.016 [2024-11-29 10:25:00.276553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:19:21.016 [2024-11-29 10:25:00.276564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.279589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.279638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.016 [2024-11-29 10:25:00.279648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:19:21.016 [2024-11-29 10:25:00.279659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.286649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.286772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:21.016 [2024-11-29 10:25:00.286838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:19:21.016 [2024-11-29 10:25:00.286890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.287321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.287391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:21.016 [2024-11-29 10:25:00.287422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:19:21.016 [2024-11-29 10:25:00.287452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.291118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.291209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:21.016 [2024-11-29 10:25:00.291240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:19:21.016 [2024-11-29 10:25:00.291270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.294444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.294529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:21.016 [2024-11-29 10:25:00.294553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:19:21.016 [2024-11-29 10:25:00.294565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.296649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.296709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.016 [2024-11-29 10:25:00.296723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:19:21.016 [2024-11-29 10:25:00.296740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.298734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.016 [2024-11-29 10:25:00.298791] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.016 [2024-11-29 10:25:00.298818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:19:21.016 [2024-11-29 10:25:00.298832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.016 [2024-11-29 10:25:00.298887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.016 [2024-11-29 10:25:00.298923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.298938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.298953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.298966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.298981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.298994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:21.016 [2024-11-29 10:25:00.299273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.016 [2024-11-29 10:25:00.299315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.299986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300406] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.017 [2024-11-29 10:25:00.300476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.017 [2024-11-29 10:25:00.300491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b17e91d5-a0e1-46ad-827e-6af5176f8902 00:19:21.017 [2024-11-29 10:25:00.300522] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.017 [2024-11-29 10:25:00.300535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.017 [2024-11-29 10:25:00.300552] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.017 [2024-11-29 10:25:00.300566] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.017 [2024-11-29 10:25:00.300589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.017 [2024-11-29 10:25:00.300604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.017 [2024-11-29 10:25:00.300620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.017 [2024-11-29 10:25:00.300631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.017 [2024-11-29 10:25:00.300645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.017 [2024-11-29 10:25:00.300659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.017 [2024-11-29 10:25:00.300680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.017 [2024-11-29 10:25:00.300699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:19:21.017 [2024-11-29 10:25:00.300715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.017 [2024-11-29 10:25:00.303435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.017 [2024-11-29 10:25:00.303489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.017 [2024-11-29 10:25:00.303505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:19:21.017 [2024-11-29 10:25:00.303523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.017 [2024-11-29 10:25:00.303674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.018 [2024-11-29 10:25:00.303707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.018 [2024-11-29 10:25:00.303723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:21.018 [2024-11-29 10:25:00.303746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.311916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.311971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.018 [2024-11-29 10:25:00.311987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.312002] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.312084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.312106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.018 [2024-11-29 10:25:00.312120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.312136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.312237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.312257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.018 [2024-11-29 10:25:00.312270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.312286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.312309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.312327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.018 [2024-11-29 10:25:00.312343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.312362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.326663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.326734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.018 [2024-11-29 10:25:00.326751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.326766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.338997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.018 [2024-11-29 10:25:00.339101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.339115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.339217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.018 [2024-11-29 10:25:00.339250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.339266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.339328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.018 [2024-11-29 10:25:00.339362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.339385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.339500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.018 [2024-11-29 10:25:00.339534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:21.018 [2024-11-29 10:25:00.339550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.339596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.018 [2024-11-29 10:25:00.339632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.339649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.339713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.018 [2024-11-29 10:25:00.339747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.339764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.339855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.018 [2024-11-29 10:25:00.339878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.018 [2024-11-29 10:25:00.339894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.018 [2024-11-29 10:25:00.339919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.018 [2024-11-29 10:25:00.340120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.425 ms, result 0 00:19:21.018 true 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86985 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 86985 ']' 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 86985 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86985 00:19:21.018 killing process with pid 86985 00:19:21.018 Received shutdown signal, test time was about 4.000000 seconds 00:19:21.018 00:19:21.018 Latency(us) 00:19:21.018 [2024-11-29T10:25:00.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.018 [2024-11-29T10:25:00.483Z] =================================================================================================================== 00:19:21.018 [2024-11-29T10:25:00.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86985' 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 86985 00:19:21.018 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 86985 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:21.281 Remove shared memory files 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:21.281 10:25:00 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:21.281 00:19:21.281 real 0m21.272s 00:19:21.281 user 0m23.821s 00:19:21.281 sys 0m0.923s 00:19:21.281 ************************************ 00:19:21.281 END TEST ftl_bdevperf 00:19:21.281 ************************************ 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.281 10:25:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:21.281 10:25:00 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:21.281 10:25:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:21.281 10:25:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.281 10:25:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:21.281 ************************************ 00:19:21.281 START TEST ftl_trim 00:19:21.281 ************************************ 00:19:21.281 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:21.543 * Looking for test storage... 00:19:21.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.543 10:25:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.543 --rc genhtml_branch_coverage=1 00:19:21.543 --rc genhtml_function_coverage=1 00:19:21.543 --rc genhtml_legend=1 00:19:21.543 --rc geninfo_all_blocks=1 00:19:21.543 --rc geninfo_unexecuted_blocks=1 00:19:21.543 00:19:21.543 ' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.543 --rc genhtml_branch_coverage=1 00:19:21.543 --rc genhtml_function_coverage=1 00:19:21.543 --rc genhtml_legend=1 00:19:21.543 --rc geninfo_all_blocks=1 00:19:21.543 --rc geninfo_unexecuted_blocks=1 00:19:21.543 00:19:21.543 ' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.543 --rc genhtml_branch_coverage=1 00:19:21.543 --rc genhtml_function_coverage=1 00:19:21.543 --rc genhtml_legend=1 00:19:21.543 --rc geninfo_all_blocks=1 00:19:21.543 --rc geninfo_unexecuted_blocks=1 00:19:21.543 00:19:21.543 ' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.543 --rc genhtml_branch_coverage=1 00:19:21.543 --rc genhtml_function_coverage=1 00:19:21.543 --rc genhtml_legend=1 00:19:21.543 --rc geninfo_all_blocks=1 00:19:21.543 --rc geninfo_unexecuted_blocks=1 00:19:21.543 00:19:21.543 ' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.543 10:25:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87326 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87326 00:19:21.543 10:25:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87326 ']' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.543 10:25:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:21.543 [2024-11-29 10:25:00.988928] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:21.543 [2024-11-29 10:25:00.989083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87326 ] 00:19:21.804 [2024-11-29 10:25:01.137915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.804 [2024-11-29 10:25:01.170604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.804 [2024-11-29 10:25:01.170789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.804 [2024-11-29 10:25:01.170901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.744 10:25:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.744 10:25:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:22.744 10:25:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:22.744 10:25:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:22.744 10:25:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:22.744 10:25:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:22.744 10:25:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:22.744 10:25:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:22.744 10:25:02 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:22.744 10:25:02 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:22.744 10:25:02 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:22.744 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:22.744 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.744 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.744 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.744 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:23.005 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.005 { 00:19:23.005 "name": "nvme0n1", 00:19:23.005 "aliases": [ 
00:19:23.005 "e2dbf5fe-7e2b-4702-a181-c8e078b47d4a" 00:19:23.005 ], 00:19:23.005 "product_name": "NVMe disk", 00:19:23.005 "block_size": 4096, 00:19:23.005 "num_blocks": 1310720, 00:19:23.005 "uuid": "e2dbf5fe-7e2b-4702-a181-c8e078b47d4a", 00:19:23.005 "numa_id": -1, 00:19:23.005 "assigned_rate_limits": { 00:19:23.005 "rw_ios_per_sec": 0, 00:19:23.005 "rw_mbytes_per_sec": 0, 00:19:23.005 "r_mbytes_per_sec": 0, 00:19:23.005 "w_mbytes_per_sec": 0 00:19:23.005 }, 00:19:23.005 "claimed": true, 00:19:23.005 "claim_type": "read_many_write_one", 00:19:23.005 "zoned": false, 00:19:23.005 "supported_io_types": { 00:19:23.005 "read": true, 00:19:23.005 "write": true, 00:19:23.005 "unmap": true, 00:19:23.005 "flush": true, 00:19:23.005 "reset": true, 00:19:23.005 "nvme_admin": true, 00:19:23.005 "nvme_io": true, 00:19:23.005 "nvme_io_md": false, 00:19:23.005 "write_zeroes": true, 00:19:23.005 "zcopy": false, 00:19:23.005 "get_zone_info": false, 00:19:23.005 "zone_management": false, 00:19:23.005 "zone_append": false, 00:19:23.005 "compare": true, 00:19:23.005 "compare_and_write": false, 00:19:23.005 "abort": true, 00:19:23.005 "seek_hole": false, 00:19:23.005 "seek_data": false, 00:19:23.005 "copy": true, 00:19:23.005 "nvme_iov_md": false 00:19:23.005 }, 00:19:23.005 "driver_specific": { 00:19:23.005 "nvme": [ 00:19:23.005 { 00:19:23.005 "pci_address": "0000:00:11.0", 00:19:23.005 "trid": { 00:19:23.005 "trtype": "PCIe", 00:19:23.005 "traddr": "0000:00:11.0" 00:19:23.005 }, 00:19:23.005 "ctrlr_data": { 00:19:23.005 "cntlid": 0, 00:19:23.005 "vendor_id": "0x1b36", 00:19:23.005 "model_number": "QEMU NVMe Ctrl", 00:19:23.005 "serial_number": "12341", 00:19:23.005 "firmware_revision": "8.0.0", 00:19:23.005 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:23.005 "oacs": { 00:19:23.006 "security": 0, 00:19:23.006 "format": 1, 00:19:23.006 "firmware": 0, 00:19:23.006 "ns_manage": 1 00:19:23.006 }, 00:19:23.006 "multi_ctrlr": false, 00:19:23.006 "ana_reporting": false 00:19:23.006 }, 00:19:23.006 "vs": { 00:19:23.006 "nvme_version": "1.4" 00:19:23.006 }, 00:19:23.006 "ns_data": { 00:19:23.006 "id": 1, 00:19:23.006 "can_share": false 00:19:23.006 } 00:19:23.006 } 00:19:23.006 ], 00:19:23.006 "mp_policy": "active_passive" 00:19:23.006 } 00:19:23.006 } 00:19:23.006 ]' 00:19:23.006 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.006 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:23.006 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.006 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:23.006 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:23.006 10:25:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:23.006 10:25:02 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:23.006 10:25:02 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:23.006 10:25:02 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:23.006 10:25:02 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:23.006 10:25:02 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:23.267 10:25:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=be82143b-3908-478d-a53e-48c81611faa8 00:19:23.267 10:25:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:23.267 10:25:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u be82143b-3908-478d-a53e-48c81611faa8 00:19:23.529 10:25:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:23.790 10:25:03 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=922b0a90-50be-4185-8699-ef5f30654eb5 00:19:23.790 10:25:03 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 922b0a90-50be-4185-8699-ef5f30654eb5 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:24.051 10:25:03 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.051 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.051 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:24.051 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:24.051 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:24.051 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.051 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:24.051 { 00:19:24.051 "name": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:24.051 "aliases": [ 00:19:24.051 "lvs/nvme0n1p0" 00:19:24.051 ], 00:19:24.051 "product_name": "Logical Volume", 00:19:24.051 "block_size": 4096, 00:19:24.051 "num_blocks": 26476544, 00:19:24.051 "uuid": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:24.051 "assigned_rate_limits": { 00:19:24.051 "rw_ios_per_sec": 0, 00:19:24.052 "rw_mbytes_per_sec": 0, 00:19:24.052 "r_mbytes_per_sec": 0, 00:19:24.052 "w_mbytes_per_sec": 0 00:19:24.052 }, 00:19:24.052 "claimed": false, 00:19:24.052 "zoned": false, 00:19:24.052 "supported_io_types": { 00:19:24.052 "read": true, 00:19:24.052 "write": true, 00:19:24.052 "unmap": true, 00:19:24.052 "flush": false, 00:19:24.052 "reset": true, 00:19:24.052 "nvme_admin": false, 00:19:24.052 "nvme_io": false, 00:19:24.052 "nvme_io_md": false, 00:19:24.052 "write_zeroes": true, 00:19:24.052 "zcopy": false, 00:19:24.052 "get_zone_info": false, 00:19:24.052 "zone_management": false, 00:19:24.052 "zone_append": false, 00:19:24.052 "compare": false, 00:19:24.052 "compare_and_write": false, 00:19:24.052 "abort": false, 00:19:24.052 "seek_hole": true, 00:19:24.052 "seek_data": true, 00:19:24.052 "copy": false, 00:19:24.052 "nvme_iov_md": false 00:19:24.052 }, 00:19:24.052 "driver_specific": { 00:19:24.052 "lvol": { 00:19:24.052 "lvol_store_uuid": "922b0a90-50be-4185-8699-ef5f30654eb5", 00:19:24.052 "base_bdev": "nvme0n1", 00:19:24.052 "thin_provision": true, 00:19:24.052 "num_allocated_clusters": 0, 00:19:24.052 "snapshot": false, 00:19:24.052 "clone": false, 00:19:24.052 "esnap_clone": false 00:19:24.052 } 00:19:24.052 } 00:19:24.052 } 00:19:24.052 ]' 00:19:24.052 10:25:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:24.313 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:24.313 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:24.313 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:24.313 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:24.313 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:24.313 10:25:03 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:24.313 10:25:03 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:24.313 10:25:03 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:24.574 10:25:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:24.574 10:25:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:24.574 10:25:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.574 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.574 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:24.574 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:24.574 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:24.574 10:25:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.574 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:24.574 { 00:19:24.574 "name": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:24.574 "aliases": [ 00:19:24.574 "lvs/nvme0n1p0" 00:19:24.574 ], 00:19:24.574 "product_name": "Logical Volume", 00:19:24.574 "block_size": 4096, 00:19:24.574 "num_blocks": 26476544, 00:19:24.574 "uuid": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:24.574 "assigned_rate_limits": { 00:19:24.574 "rw_ios_per_sec": 0, 00:19:24.574 "rw_mbytes_per_sec": 0, 00:19:24.574 "r_mbytes_per_sec": 0, 00:19:24.574 "w_mbytes_per_sec": 0 00:19:24.574 }, 00:19:24.574 "claimed": false, 00:19:24.574 "zoned": false, 00:19:24.575 "supported_io_types": { 00:19:24.575 "read": true, 00:19:24.575 "write": true, 00:19:24.575 "unmap": true, 00:19:24.575 "flush": false, 00:19:24.575 "reset": true, 00:19:24.575 "nvme_admin": false, 00:19:24.575 "nvme_io": false, 00:19:24.575 "nvme_io_md": false, 00:19:24.575 "write_zeroes": true, 00:19:24.575 "zcopy": false, 00:19:24.575 "get_zone_info": false, 00:19:24.575 "zone_management": false, 00:19:24.575 "zone_append": false, 00:19:24.575 "compare": false, 00:19:24.575 "compare_and_write": false, 00:19:24.575 "abort": false, 00:19:24.575 "seek_hole": true, 00:19:24.575 "seek_data": true, 00:19:24.575 "copy": false, 00:19:24.575 "nvme_iov_md": false 00:19:24.575 }, 00:19:24.575 "driver_specific": { 00:19:24.575 "lvol": { 00:19:24.575 "lvol_store_uuid": "922b0a90-50be-4185-8699-ef5f30654eb5", 00:19:24.575 "base_bdev": "nvme0n1", 00:19:24.575 "thin_provision": true, 00:19:24.575 "num_allocated_clusters": 0, 00:19:24.575 "snapshot": false, 00:19:24.575 "clone": false, 00:19:24.575 "esnap_clone": false 00:19:24.575 } 00:19:24.575 } 00:19:24.575 } 00:19:24.575 ]' 00:19:24.575 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:24.836 10:25:04 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:24.836 10:25:04 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:24.836 10:25:04 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:24.836 10:25:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:24.836 10:25:04 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:24.836 10:25:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0821be13-46a1-4f9b-b39a-98248387ccff 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:24.836 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0821be13-46a1-4f9b-b39a-98248387ccff 00:19:25.095 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.095 { 00:19:25.095 "name": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:25.095 "aliases": [ 00:19:25.095 "lvs/nvme0n1p0" 00:19:25.095 ], 00:19:25.095 "product_name": "Logical Volume", 00:19:25.095 "block_size": 4096, 00:19:25.095 "num_blocks": 26476544, 00:19:25.095 "uuid": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:25.095 "assigned_rate_limits": { 00:19:25.095 "rw_ios_per_sec": 0, 00:19:25.095 "rw_mbytes_per_sec": 0, 00:19:25.095 "r_mbytes_per_sec": 0, 00:19:25.095 "w_mbytes_per_sec": 0 00:19:25.095 }, 00:19:25.095 "claimed": false, 00:19:25.095 "zoned": false, 00:19:25.095 "supported_io_types": { 00:19:25.095 "read": true, 00:19:25.095 "write": true, 00:19:25.095 "unmap": true, 00:19:25.095 "flush": false, 00:19:25.095 "reset": true, 00:19:25.095 "nvme_admin": false, 00:19:25.095 "nvme_io": false, 00:19:25.095 "nvme_io_md": false, 00:19:25.095 "write_zeroes": true, 00:19:25.095 "zcopy": false, 00:19:25.095 "get_zone_info": false, 00:19:25.095 "zone_management": false, 00:19:25.095 "zone_append": false, 00:19:25.095 "compare": false, 00:19:25.095 "compare_and_write": false, 00:19:25.095 "abort": false, 00:19:25.095 "seek_hole": true, 00:19:25.095 "seek_data": true, 00:19:25.095 "copy": false, 00:19:25.095 "nvme_iov_md": false 00:19:25.095 }, 00:19:25.096 "driver_specific": { 00:19:25.096 "lvol": { 00:19:25.096 "lvol_store_uuid": "922b0a90-50be-4185-8699-ef5f30654eb5", 00:19:25.096 "base_bdev": "nvme0n1", 00:19:25.096 "thin_provision": true, 00:19:25.096 "num_allocated_clusters": 0, 00:19:25.096 "snapshot": false, 00:19:25.096 "clone": false, 00:19:25.096 "esnap_clone": false 00:19:25.096 } 00:19:25.096 } 00:19:25.096 } 00:19:25.096 ]' 00:19:25.096 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.096 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.096 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:25.355 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:25.355 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:25.355 10:25:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:25.355 10:25:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:25.355 10:25:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0821be13-46a1-4f9b-b39a-98248387ccff -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:25.355 [2024-11-29 10:25:04.748341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.748378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:25.355 [2024-11-29 10:25:04.748389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:25.355 [2024-11-29 10:25:04.748406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.750318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.750349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.355 [2024-11-29 10:25:04.750357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:19:25.355 [2024-11-29 10:25:04.750366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.750442] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:25.355 [2024-11-29 10:25:04.750627] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:25.355 [2024-11-29 10:25:04.750648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.750657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.355 [2024-11-29 10:25:04.750667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:19:25.355 [2024-11-29 10:25:04.750674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.750818] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:19:25.355 [2024-11-29 10:25:04.751845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.751878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:25.355 [2024-11-29 10:25:04.751888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:25.355 [2024-11-29 10:25:04.751896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.757155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.757180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.355 [2024-11-29 10:25:04.757189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:19:25.355 [2024-11-29 10:25:04.757205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.757297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.757307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.355 [2024-11-29 10:25:04.757318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:19:25.355 [2024-11-29 10:25:04.757325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.757358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.757365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:25.355 [2024-11-29 10:25:04.757375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:25.355 [2024-11-29 10:25:04.757381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.757415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:25.355 [2024-11-29 10:25:04.758730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.758755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.355 [2024-11-29 10:25:04.758765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:19:25.355 [2024-11-29 10:25:04.758773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.758831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.758841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:25.355 [2024-11-29 10:25:04.758848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:25.355 [2024-11-29 10:25:04.758857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.758887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:25.355 [2024-11-29 10:25:04.759009] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:25.355 [2024-11-29 10:25:04.759020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:25.355 [2024-11-29 10:25:04.759030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:25.355 [2024-11-29 10:25:04.759039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:25.355 [2024-11-29 10:25:04.759048] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:25.355 [2024-11-29 10:25:04.759054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:25.355 [2024-11-29 10:25:04.759063] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:25.355 [2024-11-29 10:25:04.759069] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:25.355 [2024-11-29 10:25:04.759078] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:25.355 [2024-11-29 10:25:04.759093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 [2024-11-29 10:25:04.759100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:25.355 [2024-11-29 10:25:04.759106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:25.355 [2024-11-29 10:25:04.759113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.355 [2024-11-29 10:25:04.759206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.355 
[2024-11-29 10:25:04.759216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:25.355 [2024-11-29 10:25:04.759223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:25.356 [2024-11-29 10:25:04.759230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.356 [2024-11-29 10:25:04.759334] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:25.356 [2024-11-29 10:25:04.759350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:25.356 [2024-11-29 10:25:04.759363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:25.356 [2024-11-29 10:25:04.759387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:25.356 [2024-11-29 10:25:04.759408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.356 [2024-11-29 10:25:04.759421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:25.356 [2024-11-29 10:25:04.759429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:25.356 [2024-11-29 10:25:04.759435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.356 [2024-11-29 10:25:04.759445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:25.356 [2024-11-29 10:25:04.759451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:25.356 [2024-11-29 10:25:04.759459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:25.356 [2024-11-29 10:25:04.759472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:25.356 [2024-11-29 10:25:04.759492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:25.356 [2024-11-29 10:25:04.759525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:25.356 [2024-11-29 10:25:04.759544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:25.356 [2024-11-29 10:25:04.759567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:25.356 [2024-11-29 10:25:04.759587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.356 [2024-11-29 10:25:04.759601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:25.356 [2024-11-29 10:25:04.759609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:25.356 [2024-11-29 10:25:04.759615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.356 [2024-11-29 10:25:04.759623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:25.356 [2024-11-29 10:25:04.759630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:25.356 [2024-11-29 10:25:04.759636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:25.356 [2024-11-29 10:25:04.759649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:25.356 [2024-11-29 10:25:04.759655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759663] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:25.356 [2024-11-29 10:25:04.759670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:25.356 [2024-11-29 10:25:04.759679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.356 [2024-11-29 10:25:04.759702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:25.356 [2024-11-29 10:25:04.759707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:25.356 [2024-11-29 10:25:04.759715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:25.356 [2024-11-29 10:25:04.759721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:25.356 [2024-11-29 10:25:04.759729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:25.356 [2024-11-29 10:25:04.759735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:25.356 [2024-11-29 10:25:04.759746] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:25.356 [2024-11-29 10:25:04.759754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.356 [2024-11-29 10:25:04.759762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:25.356 [2024-11-29 10:25:04.759768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:25.356 [2024-11-29 10:25:04.759776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:25.356 [2024-11-29 10:25:04.759781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:25.356 [2024-11-29 10:25:04.759788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:25.356 [2024-11-29 10:25:04.759793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:25.356 [2024-11-29 10:25:04.759811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:25.356 [2024-11-29 10:25:04.759818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:25.356 [2024-11-29 10:25:04.759824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:25.356 [2024-11-29 10:25:04.759831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:25.356 [2024-11-29 10:25:04.759838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:25.356 [2024-11-29 10:25:04.759843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:25.356 [2024-11-29 10:25:04.759851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:25.356 [2024-11-29 10:25:04.759857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:25.356 [2024-11-29 10:25:04.759864] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:25.356 [2024-11-29 10:25:04.759874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.356 [2024-11-29 10:25:04.759881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:25.357 [2024-11-29 10:25:04.759886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:25.357 [2024-11-29 10:25:04.759894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:25.357 [2024-11-29 10:25:04.759899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:25.357 [2024-11-29 10:25:04.759906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.357 [2024-11-29 10:25:04.759911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:25.357 [2024-11-29 10:25:04.759921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:19:25.357 [2024-11-29 10:25:04.759926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.357 [2024-11-29 10:25:04.760012] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:25.357 [2024-11-29 10:25:04.760020] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:27.885 [2024-11-29 10:25:07.150455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.150565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:27.885 [2024-11-29 10:25:07.150615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2390.417 ms 00:19:27.885 [2024-11-29 10:25:07.150640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.163948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.163989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.885 [2024-11-29 10:25:07.164003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.012 ms 00:19:27.885 [2024-11-29 10:25:07.164010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.164147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.164158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.885 [2024-11-29 10:25:07.164171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:27.885 [2024-11-29 10:25:07.164178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.182928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.182982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.885 [2024-11-29 10:25:07.182998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.710 ms 00:19:27.885 [2024-11-29 10:25:07.183007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.183096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.183111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.885 [2024-11-29 10:25:07.183123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.885 [2024-11-29 10:25:07.183131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.183478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.183505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.885 [2024-11-29 10:25:07.183518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:27.885 [2024-11-29 10:25:07.183526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.183680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.183697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.885 [2024-11-29 10:25:07.183712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:27.885 [2024-11-29 10:25:07.183732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.189608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.189641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:27.885 [2024-11-29 10:25:07.189654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.842 ms 00:19:27.885 [2024-11-29 10:25:07.189661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.198217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.885 [2024-11-29 10:25:07.212739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.212774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.885 [2024-11-29 10:25:07.212785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.980 ms 00:19:27.885 [2024-11-29 10:25:07.212794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.272115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.272162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:27.885 [2024-11-29 10:25:07.272184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.246 ms 00:19:27.885 [2024-11-29 10:25:07.272200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.272703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.885 [2024-11-29 10:25:07.272748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.885 [2024-11-29 10:25:07.272759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:27.885 [2024-11-29 10:25:07.272769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.885 [2024-11-29 10:25:07.275933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.275978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:27.886 [2024-11-29 10:25:07.275997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:19:27.886 [2024-11-29 10:25:07.276006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.278455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.278488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:27.886 [2024-11-29 10:25:07.278498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:19:27.886 [2024-11-29 10:25:07.278507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.278832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.278857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.886 [2024-11-29 10:25:07.278874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:27.886 [2024-11-29 10:25:07.278885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.307613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.307648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:27.886 [2024-11-29 10:25:07.307659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.694 ms 00:19:27.886 [2024-11-29 10:25:07.307671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
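
At this point the trace is inside bdev_ftl_create's startup sequence (the management steps from "Check configuration" through the NV-cache scrub and L2P initialization above). For reference, the RPC calls that assembled ftl0 condense into the sequence below. This is a reproduction sketch only: the PCIe addresses, sizes, and flags are copied from this log, while trim.sh itself also clears any pre-existing lvstores and derives the 5171 MiB cache size from bdev_get_bdevs output before splitting.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: thin-provisioned 103424 MiB lvol on the 0000:00:11.0 NVMe disk
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
lvol_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# Write-buffer cache: a 5171 MiB split of the 0000:00:10.0 NVMe disk
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top of both; the long timeout covers the NV-cache scrub above
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The resulting ftl0 reports 23592960 blocks of 4096 bytes, matching the "L2P entries: 23592960" line in the layout dump above.
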
00:19:27.886 [2024-11-29 10:25:07.311367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.311402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:27.886 [2024-11-29 10:25:07.311412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:19:27.886 [2024-11-29 10:25:07.311421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.314420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.314452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:27.886 [2024-11-29 10:25:07.314462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:19:27.886 [2024-11-29 10:25:07.314472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.317605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.317639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.886 [2024-11-29 10:25:07.317647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:19:27.886 [2024-11-29 10:25:07.317658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.317703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.317716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.886 [2024-11-29 10:25:07.317724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.886 [2024-11-29 10:25:07.317734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.317838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.886 [2024-11-29 10:25:07.317855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.886 [2024-11-29 10:25:07.317863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:27.886 [2024-11-29 10:25:07.317872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.886 [2024-11-29 10:25:07.318992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.886 [2024-11-29 10:25:07.319951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2570.330 ms, result 0 00:19:27.886 [2024-11-29 10:25:07.320581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.886 { 00:19:27.886 "name": "ftl0", 00:19:27.886 "uuid": "db1a25ec-2b75-4dc5-810a-a3dfa0443739" 00:19:27.886 } 00:19:27.886 10:25:07 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:27.886 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:27.886 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:27.886 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:27.886 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:27.886 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:27.886 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:28.144 10:25:07 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:28.401 [ 00:19:28.401 { 00:19:28.401 "name": "ftl0", 00:19:28.401 "aliases": [ 00:19:28.401 "db1a25ec-2b75-4dc5-810a-a3dfa0443739" 00:19:28.401 ], 00:19:28.401 "product_name": "FTL disk", 00:19:28.401 "block_size": 4096, 00:19:28.401 "num_blocks": 23592960, 00:19:28.401 "uuid": "db1a25ec-2b75-4dc5-810a-a3dfa0443739", 00:19:28.401 "assigned_rate_limits": { 00:19:28.401 "rw_ios_per_sec": 0, 00:19:28.401 "rw_mbytes_per_sec": 0, 00:19:28.401 "r_mbytes_per_sec": 0, 00:19:28.401 "w_mbytes_per_sec": 0 00:19:28.401 }, 00:19:28.401 "claimed": false, 00:19:28.401 "zoned": false, 00:19:28.401 "supported_io_types": { 00:19:28.401 "read": true, 00:19:28.401 "write": true, 00:19:28.401 "unmap": true, 00:19:28.401 "flush": true, 00:19:28.401 "reset": false, 00:19:28.401 "nvme_admin": false, 00:19:28.401 "nvme_io": false, 00:19:28.401 "nvme_io_md": false, 00:19:28.401 "write_zeroes": true, 00:19:28.401 "zcopy": false, 00:19:28.401 "get_zone_info": false, 00:19:28.401 "zone_management": false, 00:19:28.401 "zone_append": false, 00:19:28.401 "compare": false, 00:19:28.401 "compare_and_write": false, 00:19:28.401 "abort": false, 00:19:28.401 "seek_hole": false, 00:19:28.401 "seek_data": false, 00:19:28.401 "copy": false, 00:19:28.401 "nvme_iov_md": false 00:19:28.401 }, 00:19:28.401 "driver_specific": { 00:19:28.401 "ftl": { 00:19:28.401 "base_bdev": "0821be13-46a1-4f9b-b39a-98248387ccff", 00:19:28.401 "cache": "nvc0n1p0" 00:19:28.401 } 00:19:28.401 } 00:19:28.401 } 00:19:28.401 ] 00:19:28.401 10:25:07 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:28.401 10:25:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:28.401 10:25:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:28.659 10:25:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:28.659 10:25:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:28.917 10:25:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:28.917 { 00:19:28.917 "name": "ftl0", 00:19:28.917 "aliases": [ 00:19:28.917 "db1a25ec-2b75-4dc5-810a-a3dfa0443739" 00:19:28.917 ], 00:19:28.917 "product_name": "FTL disk", 00:19:28.917 "block_size": 4096, 00:19:28.917 "num_blocks": 23592960, 00:19:28.917 "uuid": "db1a25ec-2b75-4dc5-810a-a3dfa0443739", 00:19:28.917 "assigned_rate_limits": { 00:19:28.917 "rw_ios_per_sec": 0, 00:19:28.917 "rw_mbytes_per_sec": 0, 00:19:28.917 "r_mbytes_per_sec": 0, 00:19:28.917 "w_mbytes_per_sec": 0 00:19:28.917 }, 00:19:28.917 "claimed": false, 00:19:28.917 "zoned": false, 00:19:28.917 "supported_io_types": { 00:19:28.917 "read": true, 00:19:28.917 "write": true, 00:19:28.917 "unmap": true, 00:19:28.917 "flush": true, 00:19:28.917 "reset": false, 00:19:28.917 "nvme_admin": false, 00:19:28.917 "nvme_io": false, 00:19:28.917 "nvme_io_md": false, 00:19:28.917 "write_zeroes": true, 00:19:28.917 "zcopy": false, 00:19:28.917 "get_zone_info": false, 00:19:28.917 "zone_management": false, 00:19:28.917 "zone_append": false, 00:19:28.917 "compare": false, 00:19:28.917 "compare_and_write": false, 00:19:28.917 "abort": false, 00:19:28.917 "seek_hole": false, 00:19:28.917 "seek_data": false, 00:19:28.917 "copy": false, 00:19:28.917 "nvme_iov_md": false 00:19:28.917 }, 00:19:28.917 "driver_specific": { 00:19:28.917 "ftl": { 00:19:28.917 "base_bdev": "0821be13-46a1-4f9b-b39a-98248387ccff", 
00:19:28.917 "cache": "nvc0n1p0" 00:19:28.917 } 00:19:28.917 } 00:19:28.917 } 00:19:28.917 ]' 00:19:28.917 10:25:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:28.917 10:25:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:28.917 10:25:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:28.917 [2024-11-29 10:25:08.339248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.339280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.917 [2024-11-29 10:25:08.339294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.917 [2024-11-29 10:25:08.339303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.339342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.917 [2024-11-29 10:25:08.339780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.339797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.917 [2024-11-29 10:25:08.339832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:19:28.917 [2024-11-29 10:25:08.339841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.340433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.340445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.917 [2024-11-29 10:25:08.340453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:19:28.917 [2024-11-29 10:25:08.340464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.344126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.344145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.917 [2024-11-29 10:25:08.344155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:19:28.917 [2024-11-29 10:25:08.344164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.351172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.351201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.917 [2024-11-29 10:25:08.351211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:19:28.917 [2024-11-29 10:25:08.351221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.352762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.352793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.917 [2024-11-29 10:25:08.352814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:19:28.917 [2024-11-29 10:25:08.352823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.356881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.356910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.917 [2024-11-29 10:25:08.356919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.021 ms 00:19:28.917 [2024-11-29 10:25:08.356933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.917 [2024-11-29 10:25:08.357131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.917 [2024-11-29 10:25:08.357146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.917 [2024-11-29 10:25:08.357154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:19:28.918 [2024-11-29 10:25:08.357163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.918 [2024-11-29 10:25:08.358986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.918 [2024-11-29 10:25:08.359015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.918 [2024-11-29 10:25:08.359024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:19:28.918 [2024-11-29 10:25:08.359034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.918 [2024-11-29 10:25:08.360399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.918 [2024-11-29 10:25:08.360427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.918 [2024-11-29 10:25:08.360436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:19:28.918 [2024-11-29 10:25:08.360445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.918 [2024-11-29 10:25:08.361494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.918 [2024-11-29 10:25:08.361523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.918 [2024-11-29 10:25:08.361531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:19:28.918 [2024-11-29 10:25:08.361540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.918 [2024-11-29 10:25:08.362517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.918 [2024-11-29 10:25:08.362547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.918 [2024-11-29 10:25:08.362555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:19:28.918 [2024-11-29 10:25:08.362564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.918 [2024-11-29 10:25:08.362600] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.918 [2024-11-29 10:25:08.362615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362677] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 
10:25:08.362900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.362992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.918 [2024-11-29 10:25:08.363107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.918 [2024-11-29 10:25:08.363248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.919 [2024-11-29 10:25:08.363482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.919 [2024-11-29 10:25:08.363489] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:19:28.919 [2024-11-29 10:25:08.363498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.919 [2024-11-29 10:25:08.363507] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.919 [2024-11-29 10:25:08.363515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.919 [2024-11-29 10:25:08.363523] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.919 [2024-11-29 10:25:08.363532] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.919 [2024-11-29 10:25:08.363549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.919 
[2024-11-29 10:25:08.363558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.919 [2024-11-29 10:25:08.363564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.919 [2024-11-29 10:25:08.363572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.919 [2024-11-29 10:25:08.363579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.919 [2024-11-29 10:25:08.363588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.919 [2024-11-29 10:25:08.363596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:19:28.919 [2024-11-29 10:25:08.363606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.365109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.919 [2024-11-29 10:25:08.365131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.919 [2024-11-29 10:25:08.365140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:19:28.919 [2024-11-29 10:25:08.365149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.365233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.919 [2024-11-29 10:25:08.365243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.919 [2024-11-29 10:25:08.365251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:28.919 [2024-11-29 10:25:08.365259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.370479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.919 [2024-11-29 10:25:08.370509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.919 [2024-11-29 10:25:08.370521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.919 [2024-11-29 10:25:08.370530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.370605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.919 [2024-11-29 10:25:08.370616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.919 [2024-11-29 10:25:08.370634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.919 [2024-11-29 10:25:08.370644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.370706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.919 [2024-11-29 10:25:08.370718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.919 [2024-11-29 10:25:08.370725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.919 [2024-11-29 10:25:08.370734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.370775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.919 [2024-11-29 10:25:08.370784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.919 [2024-11-29 10:25:08.370792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.919 [2024-11-29 10:25:08.370842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.919 [2024-11-29 10:25:08.380118] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:28.919 [2024-11-29 10:25:08.380155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.919 [2024-11-29 10:25:08.380165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.919 [2024-11-29 10:25:08.380175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.387837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.387881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.177 [2024-11-29 10:25:08.387890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.387901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.387948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.387961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.177 [2024-11-29 10:25:08.387969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.387978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.388039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.388059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.177 [2024-11-29 10:25:08.388066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.388075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.388153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.388164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.177 [2024-11-29 10:25:08.388174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.388183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.388237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.388248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.177 [2024-11-29 10:25:08.388255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.388266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.388322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.388332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.177 [2024-11-29 10:25:08.388342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.388351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 [2024-11-29 10:25:08.388417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.177 [2024-11-29 10:25:08.388429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.177 [2024-11-29 10:25:08.388436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.177 [2024-11-29 10:25:08.388454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.177 
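The Rollback records above unwind the FTL instance in exactly the reverse of its startup order: reloc, bands metadata, trim map, valid map, NV cache, metadata, core IO channel, bands, memory pools, superblock, cache bdev, and finally the base bdev, mirroring the Action sequence that the next startup below runs forward. An illustrative sketch only (not SPDK code) of that deferred-cleanup pattern in shell, with placeholder step names and inverses:

# Each successful init step records its inverse; teardown pops newest-first,
# which reproduces the reverse-order Rollback trace seen above.
declare -a names=() undo=()

step() {                      # step <name> <undo-command>
    echo "Action: $1"
    names+=("$1")
    undo+=("$2")
}

unwind() {
    local i
    for (( i = ${#undo[@]} - 1; i >= 0; i-- )); do
        echo "Rollback: ${names[i]}"
        ${undo[i]}            # run the stored inverse
    done
}

step 'Open base bdev' true    # placeholders; real inverses close bdevs etc.
step 'Initialize bands' true
unwind                        # rolls back Initialize bands, then Open base bdev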
[2024-11-29 10:25:08.388641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.365 ms, result 0 00:19:29.177 true 00:19:29.177 10:25:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87326 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87326 ']' 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87326 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87326 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.177 killing process with pid 87326 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87326' 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87326 00:19:29.177 10:25:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87326 00:19:34.439 10:25:13 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:34.439 65536+0 records in 00:19:34.439 65536+0 records out 00:19:34.439 268435456 bytes (268 MB, 256 MiB) copied, 0.800072 s, 336 MB/s 00:19:34.439 10:25:13 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.701 [2024-11-29 10:25:13.952270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
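The killprocess 87326 trace above spells out the teardown helper from common/autotest_common.sh one command at a time: guard against an empty pid, probe liveness with kill -0, on Linux read the command name via ps (reactor_0 here) to check it is not sudo, then echo, kill, and wait. Reconstructed from exactly those traced commands as a standalone function; the sudo branch the real helper also carries is omitted because this run does not take it:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                             # @954: '[' -z 87326 ']'
    kill -0 "$pid" || return                              # @958: pid must be alive
    local process_name=
    if [ "$(uname)" = Linux ]; then                       # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
    fi
    if [ "$process_name" != sudo ]; then                  # @964: branch taken in this run
        echo "killing process with pid $pid"              # @972
        kill "$pid"                                       # @973
    fi
    wait "$pid"                                           # @978: reap, surface exit status
}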
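The write phase traced here is just two commands: trim.sh@66 fills a random pattern file with dd (65536 blocks x 4 KiB = 268435456 bytes = 256 MiB, produced in 0.80 s, about 336 MB/s), and trim.sh@69 replays that file onto the FTL bdev with spdk_dd. A standalone sketch with the paths taken from the trace; the trace elides dd's output redirection, but since spdk_dd reads test/ftl/random_pattern the pattern is assumed to be written there:

SPDK_REPO=/home/vagrant/spdk_repo/spdk    # repo path as it appears above

# 65536 x 4 KiB = 268435456 bytes (256 MiB) of random data
dd if=/dev/urandom of="$SPDK_REPO/test/ftl/random_pattern" bs=4K count=65536

# Replay the pattern onto the FTL bdev: --ob names the output bdev,
# --json points spdk_dd at the bdev config dumped earlier by the test.
"$SPDK_REPO/build/bin/spdk_dd" \
    --if="$SPDK_REPO/test/ftl/random_pattern" \
    --ob=ftl0 \
    --json="$SPDK_REPO/test/ftl/config/ftl.json"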
00:19:34.701 [2024-11-29 10:25:13.952392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87476 ] 00:19:34.701 [2024-11-29 10:25:14.097335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.701 [2024-11-29 10:25:14.121241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.963 [2024-11-29 10:25:14.240191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.963 [2024-11-29 10:25:14.240506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.963 [2024-11-29 10:25:14.400572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.400632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.963 [2024-11-29 10:25:14.400648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.963 [2024-11-29 10:25:14.400657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.403329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.403383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.963 [2024-11-29 10:25:14.403394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:19:34.963 [2024-11-29 10:25:14.403402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.403510] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.963 [2024-11-29 10:25:14.403773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.963 [2024-11-29 10:25:14.403814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.403823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.963 [2024-11-29 10:25:14.403836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:34.963 [2024-11-29 10:25:14.403843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.405667] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:34.963 [2024-11-29 10:25:14.409375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.409421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:34.963 [2024-11-29 10:25:14.409442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:19:34.963 [2024-11-29 10:25:14.409453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.409546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.409562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:34.963 [2024-11-29 10:25:14.409571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:34.963 [2024-11-29 10:25:14.409579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.415603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:34.963 [2024-11-29 10:25:14.415632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.963 [2024-11-29 10:25:14.415642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:19:34.963 [2024-11-29 10:25:14.415649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.415762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.415776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.963 [2024-11-29 10:25:14.415784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:34.963 [2024-11-29 10:25:14.415793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.415834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.415842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.963 [2024-11-29 10:25:14.415855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:34.963 [2024-11-29 10:25:14.415863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.415885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:34.963 [2024-11-29 10:25:14.417212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.417237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.963 [2024-11-29 10:25:14.417246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:19:34.963 [2024-11-29 10:25:14.417260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.417294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.417306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.963 [2024-11-29 10:25:14.417314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:34.963 [2024-11-29 10:25:14.417321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.417340] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:34.963 [2024-11-29 10:25:14.417355] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:34.963 [2024-11-29 10:25:14.417392] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:34.963 [2024-11-29 10:25:14.417408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:34.963 [2024-11-29 10:25:14.417509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.963 [2024-11-29 10:25:14.417519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.963 [2024-11-29 10:25:14.417530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:34.963 [2024-11-29 10:25:14.417539] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.963 [2024-11-29 10:25:14.417548] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.963 [2024-11-29 10:25:14.417556] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:34.963 [2024-11-29 10:25:14.417563] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.963 [2024-11-29 10:25:14.417570] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.963 [2024-11-29 10:25:14.417582] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.963 [2024-11-29 10:25:14.417591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.417598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.963 [2024-11-29 10:25:14.417605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:34.963 [2024-11-29 10:25:14.417612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.417702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-11-29 10:25:14.417713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.963 [2024-11-29 10:25:14.417720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:34.963 [2024-11-29 10:25:14.417727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-11-29 10:25:14.417847] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.963 [2024-11-29 10:25:14.417862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.963 [2024-11-29 10:25:14.417870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.963 [2024-11-29 10:25:14.417881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.963 [2024-11-29 10:25:14.417888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.963 [2024-11-29 10:25:14.417894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.963 [2024-11-29 10:25:14.417901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:34.963 [2024-11-29 10:25:14.417910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.963 [2024-11-29 10:25:14.417917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:34.963 [2024-11-29 10:25:14.417924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.963 [2024-11-29 10:25:14.417931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.963 [2024-11-29 10:25:14.417937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:34.963 [2024-11-29 10:25:14.417944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.963 [2024-11-29 10:25:14.417951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.963 [2024-11-29 10:25:14.417957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:34.963 [2024-11-29 10:25:14.417963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.963 [2024-11-29 10:25:14.417970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.963 [2024-11-29 10:25:14.417976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:34.963 [2024-11-29 10:25:14.417983] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.964 [2024-11-29 10:25:14.417993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.964 [2024-11-29 10:25:14.418000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.964 [2024-11-29 10:25:14.418014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.964 [2024-11-29 10:25:14.418025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.964 [2024-11-29 10:25:14.418038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.964 [2024-11-29 10:25:14.418044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.964 [2024-11-29 10:25:14.418057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.964 [2024-11-29 10:25:14.418063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.964 [2024-11-29 10:25:14.418077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.964 [2024-11-29 10:25:14.418083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.964 [2024-11-29 10:25:14.418096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.964 [2024-11-29 10:25:14.418102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:34.964 [2024-11-29 10:25:14.418117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.964 [2024-11-29 10:25:14.418123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.964 [2024-11-29 10:25:14.418130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:34.964 [2024-11-29 10:25:14.418138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.964 [2024-11-29 10:25:14.418151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:34.964 [2024-11-29 10:25:14.418158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418164] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.964 [2024-11-29 10:25:14.418171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.964 [2024-11-29 10:25:14.418178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.964 [2024-11-29 10:25:14.418185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.964 [2024-11-29 10:25:14.418193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.964 [2024-11-29 10:25:14.418199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.964 [2024-11-29 10:25:14.418205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.964 
[2024-11-29 10:25:14.418212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.964 [2024-11-29 10:25:14.418220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.964 [2024-11-29 10:25:14.418227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.964 [2024-11-29 10:25:14.418236] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.964 [2024-11-29 10:25:14.418244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:34.964 [2024-11-29 10:25:14.418262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:34.964 [2024-11-29 10:25:14.418269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:34.964 [2024-11-29 10:25:14.418276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:34.964 [2024-11-29 10:25:14.418283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:34.964 [2024-11-29 10:25:14.418290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:34.964 [2024-11-29 10:25:14.418297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:34.964 [2024-11-29 10:25:14.418308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:34.964 [2024-11-29 10:25:14.418315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:34.964 [2024-11-29 10:25:14.418323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:34.964 [2024-11-29 10:25:14.418358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.964 [2024-11-29 10:25:14.418369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.964 [2024-11-29 10:25:14.418386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.964 [2024-11-29 10:25:14.418393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.964 [2024-11-29 10:25:14.418400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.964 [2024-11-29 10:25:14.418407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.964 [2024-11-29 10:25:14.418415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.964 [2024-11-29 10:25:14.418422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:19:34.964 [2024-11-29 10:25:14.418428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.427147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.427284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.227 [2024-11-29 10:25:14.427299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.670 ms 00:19:35.227 [2024-11-29 10:25:14.427308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.427426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.427441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.227 [2024-11-29 10:25:14.427454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:35.227 [2024-11-29 10:25:14.427461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.444469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.444515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.227 [2024-11-29 10:25:14.444529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.987 ms 00:19:35.227 [2024-11-29 10:25:14.444540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.444637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.444652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.227 [2024-11-29 10:25:14.444662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.227 [2024-11-29 10:25:14.444671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.445046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.445073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.227 [2024-11-29 10:25:14.445085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:35.227 [2024-11-29 10:25:14.445094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.445256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.445285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.227 [2024-11-29 10:25:14.445301] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:19:35.227 [2024-11-29 10:25:14.445311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.451153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.451183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.227 [2024-11-29 10:25:14.451196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.816 ms 00:19:35.227 [2024-11-29 10:25:14.451204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.453743] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.227 [2024-11-29 10:25:14.453778] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.227 [2024-11-29 10:25:14.453794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.453818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.227 [2024-11-29 10:25:14.453826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:19:35.227 [2024-11-29 10:25:14.453833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.468275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.468310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.227 [2024-11-29 10:25:14.468320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.401 ms 00:19:35.227 [2024-11-29 10:25:14.468328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.470410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.470440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.227 [2024-11-29 10:25:14.470448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:19:35.227 [2024-11-29 10:25:14.470454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.472012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.472129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.227 [2024-11-29 10:25:14.472143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:19:35.227 [2024-11-29 10:25:14.472150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.472455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.472467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.227 [2024-11-29 10:25:14.472476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:35.227 [2024-11-29 10:25:14.472483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.488437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.488478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.227 [2024-11-29 10:25:14.488490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.917 ms 00:19:35.227 [2024-11-29 10:25:14.488498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.227 [2024-11-29 10:25:14.495937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.227 [2024-11-29 10:25:14.509971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.227 [2024-11-29 10:25:14.510005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.227 [2024-11-29 10:25:14.510023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.418 ms 00:19:35.227 [2024-11-29 10:25:14.510030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 [2024-11-29 10:25:14.510137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.228 [2024-11-29 10:25:14.510152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.228 [2024-11-29 10:25:14.510161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.228 [2024-11-29 10:25:14.510171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 [2024-11-29 10:25:14.510217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.228 [2024-11-29 10:25:14.510226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.228 [2024-11-29 10:25:14.510234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:35.228 [2024-11-29 10:25:14.510242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 [2024-11-29 10:25:14.510263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.228 [2024-11-29 10:25:14.510272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.228 [2024-11-29 10:25:14.510280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.228 [2024-11-29 10:25:14.510287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 [2024-11-29 10:25:14.510322] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.228 [2024-11-29 10:25:14.510331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.228 [2024-11-29 10:25:14.510339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.228 [2024-11-29 10:25:14.510346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.228 [2024-11-29 10:25:14.510353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 [2024-11-29 10:25:14.514459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.228 [2024-11-29 10:25:14.514576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.228 [2024-11-29 10:25:14.514591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.087 ms 00:19:35.228 [2024-11-29 10:25:14.514599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 [2024-11-29 10:25:14.514689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.228 [2024-11-29 10:25:14.514699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.228 [2024-11-29 10:25:14.514711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:35.228 [2024-11-29 10:25:14.514719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.228 
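The layout dumps above describe each region twice: ftl_layout.c prints offsets and sizes in MiB, while the superblock dump from ftl_sb_v5.c prints the same regions as hex blk_offs/blk_sz counts of 4 KiB FTL blocks. The two agree; for example the l2p region's blk_sz:0x5a00 is 23040 blocks x 4 KiB = 92160 KiB = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB", and blk_offs:0x20 is 128 KiB, i.e. the 0.12 MiB offset shown. A shell one-liner to convert the hex fields on a saved copy of this console output (ftl.log is an assumed local filename):

# bash evaluates 0x-prefixed values as hex inside $(( )), so the block
# counts convert directly; sizes print in KiB (92160 KiB = 90 MiB).
grep -o 'blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' ftl.log |
while read -r offs sz; do
    echo "offs=$(( ${offs#blk_offs:} * 4 )) KiB size=$(( ${sz#blk_sz:} * 4 )) KiB"
done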
[2024-11-29 10:25:14.515522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.228 [2024-11-29 10:25:14.516523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.676 ms, result 0 00:19:35.228 [2024-11-29 10:25:14.517706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.228 [2024-11-29 10:25:14.527038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.181  [2024-11-29T10:25:16.588Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T10:25:17.534Z] Copying: 29/256 [MB] (15 MBps) [2024-11-29T10:25:18.579Z] Copying: 44/256 [MB] (14 MBps) [2024-11-29T10:25:19.543Z] Copying: 64/256 [MB] (20 MBps) [2024-11-29T10:25:20.918Z] Copying: 76/256 [MB] (11 MBps) [2024-11-29T10:25:21.852Z] Copying: 109/256 [MB] (32 MBps) [2024-11-29T10:25:22.790Z] Copying: 137/256 [MB] (28 MBps) [2024-11-29T10:25:23.731Z] Copying: 172/256 [MB] (35 MBps) [2024-11-29T10:25:24.672Z] Copying: 197/256 [MB] (24 MBps) [2024-11-29T10:25:25.614Z] Copying: 211/256 [MB] (13 MBps) [2024-11-29T10:25:26.557Z] Copying: 227/256 [MB] (16 MBps) [2024-11-29T10:25:27.131Z] Copying: 246/256 [MB] (18 MBps) [2024-11-29T10:25:27.131Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-29 10:25:26.959177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.666 [2024-11-29 10:25:26.960502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.666 [2024-11-29 10:25:26.960542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.666 [2024-11-29 10:25:26.960553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.666 [2024-11-29 10:25:26.960560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.666 [2024-11-29 10:25:26.960577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.666 [2024-11-29 10:25:26.961111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.666 [2024-11-29 10:25:26.961144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.666 [2024-11-29 10:25:26.961152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:19:47.666 [2024-11-29 10:25:26.961158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.666 [2024-11-29 10:25:26.962886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.666 [2024-11-29 10:25:26.962914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.666 [2024-11-29 10:25:26.962922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:19:47.666 [2024-11-29 10:25:26.962933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.666 [2024-11-29 10:25:26.969634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.666 [2024-11-29 10:25:26.969662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.666 [2024-11-29 10:25:26.969670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.688 ms 00:19:47.667 [2024-11-29 10:25:26.969677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.974895] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.974926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.667 [2024-11-29 10:25:26.974934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.192 ms 00:19:47.667 [2024-11-29 10:25:26.974940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.976542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.976570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.667 [2024-11-29 10:25:26.976577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:19:47.667 [2024-11-29 10:25:26.976583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.980376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.980409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.667 [2024-11-29 10:25:26.980421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:19:47.667 [2024-11-29 10:25:26.980426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.980521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.980529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.667 [2024-11-29 10:25:26.980535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:47.667 [2024-11-29 10:25:26.980546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.982587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.982613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.667 [2024-11-29 10:25:26.982620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:19:47.667 [2024-11-29 10:25:26.982626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.984121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.984146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.667 [2024-11-29 10:25:26.984152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:19:47.667 [2024-11-29 10:25:26.984157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.985312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.985432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.667 [2024-11-29 10:25:26.985444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:19:47.667 [2024-11-29 10:25:26.985449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-11-29 10:25:26.986743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.667 [2024-11-29 10:25:26.986771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.667 [2024-11-29 10:25:26.986777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:19:47.667 [2024-11-29 10:25:26.986782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
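Each management step in these traces emits a fixed quartet of records (Action or Rollback, name, duration, status), and finish_msg then reports the pipeline's total wall time: 49.365 ms for the 'FTL shutdown' earlier and 114.676 ms for the 'FTL startup' above. A quick sanity check on a saved copy of the console output (again assuming ftl.log) is to sum the per-step durations, which should land below the finish_msg totals because time spent between steps is not attributed to any of them:

# Sum every per-step duration in the file (all pipelines combined)...
grep -o 'duration: [0-9.]* ms' ftl.log |
    awk '{ sum += $2 } END { printf "sum of steps: %.3f ms\n", sum }'

# ...and compare with the totals the management layer prints itself:
grep -o "name 'FTL [a-z]*', duration = [0-9.]* ms" ftl.log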
00:19:47.667 [2024-11-29 10:25:26.986816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.667 [2024-11-29 10:25:26.986828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 
[2024-11-29 10:25:26.986973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.986996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:19:47.667 [2024-11-29 10:25:26.987122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.667 [2024-11-29 10:25:26.987191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.668 [2024-11-29 10:25:26.987421] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.668 [2024-11-29 10:25:26.987427] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:19:47.668 [2024-11-29 10:25:26.987433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.668 [2024-11-29 10:25:26.987439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.668 [2024-11-29 10:25:26.987445] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.668 [2024-11-29 10:25:26.987451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.668 [2024-11-29 10:25:26.987456] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.668 [2024-11-29 10:25:26.987462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.668 [2024-11-29 10:25:26.987468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.668 [2024-11-29 10:25:26.987473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.668 [2024-11-29 10:25:26.987477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.668 [2024-11-29 10:25:26.987482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.668 [2024-11-29 10:25:26.987490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.668 [2024-11-29 10:25:26.987497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:19:47.668 [2024-11-29 10:25:26.987503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:26.989206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.668 [2024-11-29 10:25:26.989223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.668 [2024-11-29 10:25:26.989230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:19:47.668 [2024-11-29 10:25:26.989236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:26.989327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.668 [2024-11-29 10:25:26.989333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.668 [2024-11-29 10:25:26.989340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:47.668 [2024-11-29 10:25:26.989345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:26.995261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:26.995389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.668 [2024-11-29 10:25:26.995402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:26.995408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:26.995469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:26.995476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.668 [2024-11-29 10:25:26.995482] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:26.995488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:26.995527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:26.995535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.668 [2024-11-29 10:25:26.995541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:26.995546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:26.995562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:26.995569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.668 [2024-11-29 10:25:26.995580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:26.995586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.006238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:27.006366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.668 [2024-11-29 10:25:27.006379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:27.006386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.014890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:27.014925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.668 [2024-11-29 10:25:27.014934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:27.014941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.015000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:27.015008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.668 [2024-11-29 10:25:27.015015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:27.015021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.015048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:27.015055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.668 [2024-11-29 10:25:27.015063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:27.015070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.015127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:27.015135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.668 [2024-11-29 10:25:27.015142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:27.015155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.015188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-11-29 10:25:27.015195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:19:47.668 [2024-11-29 10:25:27.015204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-11-29 10:25:27.015210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-11-29 10:25:27.015245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.669 [2024-11-29 10:25:27.015253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.669 [2024-11-29 10:25:27.015258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.669 [2024-11-29 10:25:27.015265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.669 [2024-11-29 10:25:27.015307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.669 [2024-11-29 10:25:27.015315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.669 [2024-11-29 10:25:27.015324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.669 [2024-11-29 10:25:27.015331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.669 [2024-11-29 10:25:27.015459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.927 ms, result 0 00:19:47.931 00:19:47.931 00:19:47.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.931 10:25:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87619 00:19:47.931 10:25:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87619 00:19:47.931 10:25:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87619 ']' 00:19:47.931 10:25:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:47.931 10:25:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.931 10:25:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.931 10:25:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.931 10:25:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.931 10:25:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:47.931 [2024-11-29 10:25:27.340210] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
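The xtrace lines above show trim.sh launching spdk_tgt in the background, recording its pid in svcpid, and blocking in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock accepts connections. A minimal sketch of that wait pattern follows; it is not SPDK's actual waitforlisten implementation, only the idea, with the binary and rpc.py paths taken from the log above:

    # Start the target in the background and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Poll the RPC socket until the target answers; rpc_get_methods is a
    # lightweight RPC that succeeds once the server is listening.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the loop exits, the script can drive the target over RPC; the two bdev_ftl_unmap calls later in this log trim the first and the last 1024 blocks of the device (23591936 + 1024 = 23592960, the L2P entry count reported in the layout dump).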
00:19:47.931 [2024-11-29 10:25:27.340351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87619 ] 00:19:48.193 [2024-11-29 10:25:27.488527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.193 [2024-11-29 10:25:27.519377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.786 10:25:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.786 10:25:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:48.786 10:25:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:49.047 [2024-11-29 10:25:28.380362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.047 [2024-11-29 10:25:28.380430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.310 [2024-11-29 10:25:28.557201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.310 [2024-11-29 10:25:28.557263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.310 [2024-11-29 10:25:28.557279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:49.310 [2024-11-29 10:25:28.557289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.310 [2024-11-29 10:25:28.559927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.310 [2024-11-29 10:25:28.559985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.310 [2024-11-29 10:25:28.559996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:19:49.310 [2024-11-29 10:25:28.560005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.310 [2024-11-29 10:25:28.560136] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.310 [2024-11-29 10:25:28.560405] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.310 [2024-11-29 10:25:28.560423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.310 [2024-11-29 10:25:28.560435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.310 [2024-11-29 10:25:28.560446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:19:49.310 [2024-11-29 10:25:28.560455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.310 [2024-11-29 10:25:28.562354] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.310 [2024-11-29 10:25:28.566266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.566321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.311 [2024-11-29 10:25:28.566336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.909 ms 00:19:49.311 [2024-11-29 10:25:28.566345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.566430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.566445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.311 [2024-11-29 10:25:28.566459] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:49.311 [2024-11-29 10:25:28.566467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.575019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.575063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.311 [2024-11-29 10:25:28.575077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.493 ms 00:19:49.311 [2024-11-29 10:25:28.575086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.575226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.575238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.311 [2024-11-29 10:25:28.575254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:49.311 [2024-11-29 10:25:28.575262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.575296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.575309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.311 [2024-11-29 10:25:28.575319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:49.311 [2024-11-29 10:25:28.575327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.575353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:49.311 [2024-11-29 10:25:28.577480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.577524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.311 [2024-11-29 10:25:28.577538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:19:49.311 [2024-11-29 10:25:28.577548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.577595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.577609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.311 [2024-11-29 10:25:28.577618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:49.311 [2024-11-29 10:25:28.577628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.577650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:49.311 [2024-11-29 10:25:28.577674] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:49.311 [2024-11-29 10:25:28.577712] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.311 [2024-11-29 10:25:28.577734] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:49.311 [2024-11-29 10:25:28.577863] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.311 [2024-11-29 10:25:28.577886] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.311 [2024-11-29 10:25:28.577897] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:49.311 [2024-11-29 10:25:28.577910] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.311 [2024-11-29 10:25:28.577922] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.311 [2024-11-29 10:25:28.577960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:49.311 [2024-11-29 10:25:28.577970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.311 [2024-11-29 10:25:28.577982] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.311 [2024-11-29 10:25:28.577990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.311 [2024-11-29 10:25:28.578003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.578012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.311 [2024-11-29 10:25:28.578022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:19:49.311 [2024-11-29 10:25:28.578030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.578139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.311 [2024-11-29 10:25:28.578157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.311 [2024-11-29 10:25:28.578174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:49.311 [2024-11-29 10:25:28.578186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.311 [2024-11-29 10:25:28.578295] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.311 [2024-11-29 10:25:28.578309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.311 [2024-11-29 10:25:28.578320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.311 [2024-11-29 10:25:28.578352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.311 [2024-11-29 10:25:28.578382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.311 [2024-11-29 10:25:28.578403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.311 [2024-11-29 10:25:28.578411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:49.311 [2024-11-29 10:25:28.578422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.311 [2024-11-29 10:25:28.578431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.311 [2024-11-29 10:25:28.578443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:49.311 [2024-11-29 10:25:28.578451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.311 
[2024-11-29 10:25:28.578462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.311 [2024-11-29 10:25:28.578472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.311 [2024-11-29 10:25:28.578508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.311 [2024-11-29 10:25:28.578544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.311 [2024-11-29 10:25:28.578577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.311 [2024-11-29 10:25:28.578602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.311 [2024-11-29 10:25:28.578626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.311 [2024-11-29 10:25:28.578642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.311 [2024-11-29 10:25:28.578649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:49.311 [2024-11-29 10:25:28.578659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.311 [2024-11-29 10:25:28.578666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.311 [2024-11-29 10:25:28.578674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:49.311 [2024-11-29 10:25:28.578683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.311 [2024-11-29 10:25:28.578699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:49.311 [2024-11-29 10:25:28.578707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.311 [2024-11-29 10:25:28.578725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.311 [2024-11-29 10:25:28.578734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.311 [2024-11-29 10:25:28.578751] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:49.311 [2024-11-29 10:25:28.578762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.311 [2024-11-29 10:25:28.578770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.311 [2024-11-29 10:25:28.578779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.311 [2024-11-29 10:25:28.578785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.311 [2024-11-29 10:25:28.578798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.311 [2024-11-29 10:25:28.578822] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.311 [2024-11-29 10:25:28.578836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.578850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:49.312 [2024-11-29 10:25:28.578861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:49.312 [2024-11-29 10:25:28.578869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:49.312 [2024-11-29 10:25:28.578878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:49.312 [2024-11-29 10:25:28.578886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:49.312 [2024-11-29 10:25:28.578897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:49.312 [2024-11-29 10:25:28.578905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:49.312 [2024-11-29 10:25:28.578914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:49.312 [2024-11-29 10:25:28.578923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:49.312 [2024-11-29 10:25:28.578934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.578942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.578957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.578964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.578978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:49.312 [2024-11-29 10:25:28.578985] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.312 [2024-11-29 
10:25:28.578996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.579004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.312 [2024-11-29 10:25:28.579015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.312 [2024-11-29 10:25:28.579022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.312 [2024-11-29 10:25:28.579033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.312 [2024-11-29 10:25:28.579041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.579052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.312 [2024-11-29 10:25:28.579060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:19:49.312 [2024-11-29 10:25:28.579069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.593677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.593958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.312 [2024-11-29 10:25:28.593981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.528 ms 00:19:49.312 [2024-11-29 10:25:28.593992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.594146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.594168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.312 [2024-11-29 10:25:28.594179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:49.312 [2024-11-29 10:25:28.594190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.607380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.607435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.312 [2024-11-29 10:25:28.607451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.164 ms 00:19:49.312 [2024-11-29 10:25:28.607466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.607540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.607553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.312 [2024-11-29 10:25:28.607562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.312 [2024-11-29 10:25:28.607572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.608167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.608214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.312 [2024-11-29 10:25:28.608227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:19:49.312 [2024-11-29 10:25:28.608239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.608405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.608422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.312 [2024-11-29 10:25:28.608431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:49.312 [2024-11-29 10:25:28.608445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.618301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.618351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.312 [2024-11-29 10:25:28.618362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.831 ms 00:19:49.312 [2024-11-29 10:25:28.618378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.640943] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:49.312 [2024-11-29 10:25:28.641027] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.312 [2024-11-29 10:25:28.641045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.641058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.312 [2024-11-29 10:25:28.641071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.532 ms 00:19:49.312 [2024-11-29 10:25:28.641083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.657713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.657752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:49.312 [2024-11-29 10:25:28.657763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.556 ms 00:19:49.312 [2024-11-29 10:25:28.657775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.660431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.660585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:49.312 [2024-11-29 10:25:28.660602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:19:49.312 [2024-11-29 10:25:28.660612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.662485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.662522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:49.312 [2024-11-29 10:25:28.662532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:19:49.312 [2024-11-29 10:25:28.662541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.662884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.662899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.312 [2024-11-29 10:25:28.662907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:49.312 [2024-11-29 10:25:28.662916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 
10:25:28.679074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.679231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:49.312 [2024-11-29 10:25:28.679250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.137 ms 00:19:49.312 [2024-11-29 10:25:28.679262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.686907] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:49.312 [2024-11-29 10:25:28.701441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.701476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.312 [2024-11-29 10:25:28.701491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.097 ms 00:19:49.312 [2024-11-29 10:25:28.701499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.701576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.701589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:49.312 [2024-11-29 10:25:28.701600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:49.312 [2024-11-29 10:25:28.701608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.701658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.701668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.312 [2024-11-29 10:25:28.701678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:49.312 [2024-11-29 10:25:28.701685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.701710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.701718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.312 [2024-11-29 10:25:28.701734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.312 [2024-11-29 10:25:28.701741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.701774] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:49.312 [2024-11-29 10:25:28.701785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.701794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:49.312 [2024-11-29 10:25:28.701819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:49.312 [2024-11-29 10:25:28.701829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.312 [2024-11-29 10:25:28.706617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.312 [2024-11-29 10:25:28.706785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.312 [2024-11-29 10:25:28.706815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:19:49.313 [2024-11-29 10:25:28.706829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.313 [2024-11-29 10:25:28.706906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.313 [2024-11-29 10:25:28.706918] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.313 [2024-11-29 10:25:28.706928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:49.313 [2024-11-29 10:25:28.706937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.313 [2024-11-29 10:25:28.707756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:49.313 [2024-11-29 10:25:28.708841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.285 ms, result 0 00:19:49.313 [2024-11-29 10:25:28.710551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:49.313 Some configs were skipped because the RPC state that can call them passed over. 00:19:49.313 10:25:28 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:49.574 [2024-11-29 10:25:28.941274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.574 [2024-11-29 10:25:28.941464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:49.574 [2024-11-29 10:25:28.941535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:19:49.574 [2024-11-29 10:25:28.941562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.574 [2024-11-29 10:25:28.941627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.572 ms, result 0 00:19:49.574 true 00:19:49.574 10:25:28 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:49.835 [2024-11-29 10:25:29.164974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.835 [2024-11-29 10:25:29.165168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:49.835 [2024-11-29 10:25:29.165231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:19:49.835 [2024-11-29 10:25:29.165258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.835 [2024-11-29 10:25:29.165318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.989 ms, result 0 00:19:49.835 true 00:19:49.835 10:25:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87619 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87619 ']' 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87619 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87619 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.835 killing process with pid 87619 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87619' 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87619 00:19:49.835 10:25:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87619 00:19:50.098 [2024-11-29 10:25:29.337195] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.098 [2024-11-29 10:25:29.337243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.098 [2024-11-29 10:25:29.337258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.098 [2024-11-29 10:25:29.337267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.098 [2024-11-29 10:25:29.337298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:50.098 [2024-11-29 10:25:29.337772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.337826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.099 [2024-11-29 10:25:29.337838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:19:50.099 [2024-11-29 10:25:29.337851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.338188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.338228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.099 [2024-11-29 10:25:29.338239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:50.099 [2024-11-29 10:25:29.338258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.342976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.343013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.099 [2024-11-29 10:25:29.343023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:19:50.099 [2024-11-29 10:25:29.343035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.349977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.350026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.099 [2024-11-29 10:25:29.350037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.906 ms 00:19:50.099 [2024-11-29 10:25:29.350048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.352246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.352417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.099 [2024-11-29 10:25:29.352433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:19:50.099 [2024-11-29 10:25:29.352442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.356981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.357026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.099 [2024-11-29 10:25:29.357039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.500 ms 00:19:50.099 [2024-11-29 10:25:29.357049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.357178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.357198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.099 [2024-11-29 10:25:29.357207] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:50.099 [2024-11-29 10:25:29.357216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.360087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.360128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.099 [2024-11-29 10:25:29.360138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:19:50.099 [2024-11-29 10:25:29.360149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.362593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.362635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.099 [2024-11-29 10:25:29.362644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:19:50.099 [2024-11-29 10:25:29.362653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.364638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.364681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.099 [2024-11-29 10:25:29.364690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:19:50.099 [2024-11-29 10:25:29.364699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.366596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.099 [2024-11-29 10:25:29.366741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.099 [2024-11-29 10:25:29.366756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:19:50.099 [2024-11-29 10:25:29.366765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.099 [2024-11-29 10:25:29.366810] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.099 [2024-11-29 10:25:29.366828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366922] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.366996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 
[2024-11-29 10:25:29.367149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.099 [2024-11-29 10:25:29.367310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.100 [2024-11-29 10:25:29.367361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.100 [2024-11-29 10:25:29.367717] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.100 [2024-11-29 10:25:29.367726] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:19:50.100 [2024-11-29 10:25:29.367738] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.100 [2024-11-29 10:25:29.367745] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.100 [2024-11-29 10:25:29.367753] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.100 [2024-11-29 10:25:29.367762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.100 [2024-11-29 10:25:29.367770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.100 [2024-11-29 10:25:29.367782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.100 [2024-11-29 10:25:29.367791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.100 [2024-11-29 10:25:29.367815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.100 [2024-11-29 10:25:29.367824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.100 [2024-11-29 10:25:29.367831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:50.100 [2024-11-29 10:25:29.367841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.100 [2024-11-29 10:25:29.367849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:19:50.100 [2024-11-29 10:25:29.367860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.369556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.100 [2024-11-29 10:25:29.369583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.100 [2024-11-29 10:25:29.369592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:19:50.100 [2024-11-29 10:25:29.369601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.369692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.100 [2024-11-29 10:25:29.369703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.100 [2024-11-29 10:25:29.369711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:50.100 [2024-11-29 10:25:29.369720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.375933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.376061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.100 [2024-11-29 10:25:29.376114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.376139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.376240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.376269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.100 [2024-11-29 10:25:29.376279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.376290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.376338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.376351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.100 [2024-11-29 10:25:29.376360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.376369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.376390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.376401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.100 [2024-11-29 10:25:29.376408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.376417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.387382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.387433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.100 [2024-11-29 10:25:29.387444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.387460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 
10:25:29.395825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.395868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.100 [2024-11-29 10:25:29.395878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.395890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.395938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.395950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.100 [2024-11-29 10:25:29.395962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.395972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.396004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.100 [2024-11-29 10:25:29.396015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.100 [2024-11-29 10:25:29.396023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.100 [2024-11-29 10:25:29.396032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.100 [2024-11-29 10:25:29.396104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.101 [2024-11-29 10:25:29.396118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.101 [2024-11-29 10:25:29.396127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.101 [2024-11-29 10:25:29.396136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.101 [2024-11-29 10:25:29.396166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.101 [2024-11-29 10:25:29.396178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.101 [2024-11-29 10:25:29.396185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.101 [2024-11-29 10:25:29.396197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.101 [2024-11-29 10:25:29.396235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.101 [2024-11-29 10:25:29.396250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.101 [2024-11-29 10:25:29.396258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.101 [2024-11-29 10:25:29.396267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.101 [2024-11-29 10:25:29.396310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.101 [2024-11-29 10:25:29.396323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.101 [2024-11-29 10:25:29.396331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.101 [2024-11-29 10:25:29.396340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.101 [2024-11-29 10:25:29.396481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.256 ms, result 0 00:19:50.362 10:25:29 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:50.362 10:25:29 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:50.362 [2024-11-29 10:25:29.653771] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:50.362 [2024-11-29 10:25:29.653928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87660 ] 00:19:50.362 [2024-11-29 10:25:29.798203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.624 [2024-11-29 10:25:29.829755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.624 [2024-11-29 10:25:29.947046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.624 [2024-11-29 10:25:29.947126] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.887 [2024-11-29 10:25:30.108341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.108403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.887 [2024-11-29 10:25:30.108419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.887 [2024-11-29 10:25:30.108428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.111047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.111253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.887 [2024-11-29 10:25:30.111274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:19:50.887 [2024-11-29 10:25:30.111284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.111411] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.887 [2024-11-29 10:25:30.111675] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.887 [2024-11-29 10:25:30.111700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.111709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.887 [2024-11-29 10:25:30.111720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:50.887 [2024-11-29 10:25:30.111728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.113718] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.887 [2024-11-29 10:25:30.117587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.117641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.887 [2024-11-29 10:25:30.117658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.871 ms 00:19:50.887 [2024-11-29 10:25:30.117667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.117750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.117767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.887 [2024-11-29 10:25:30.117776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:19:50.887 [2024-11-29 10:25:30.117785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.126376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.126420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.887 [2024-11-29 10:25:30.126431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.523 ms 00:19:50.887 [2024-11-29 10:25:30.126449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.126593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.126606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.887 [2024-11-29 10:25:30.126617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:50.887 [2024-11-29 10:25:30.126628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.126656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.126665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.887 [2024-11-29 10:25:30.126674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:50.887 [2024-11-29 10:25:30.126686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.126714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:50.887 [2024-11-29 10:25:30.129011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.129167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.887 [2024-11-29 10:25:30.129226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.302 ms 00:19:50.887 [2024-11-29 10:25:30.129257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.129322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.129348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.887 [2024-11-29 10:25:30.129370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:50.887 [2024-11-29 10:25:30.129389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.129425] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.887 [2024-11-29 10:25:30.129566] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:50.887 [2024-11-29 10:25:30.129615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.887 [2024-11-29 10:25:30.129638] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:50.887 [2024-11-29 10:25:30.129745] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:50.887 [2024-11-29 10:25:30.129758] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.887 [2024-11-29 10:25:30.129770] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:50.887 [2024-11-29 10:25:30.129782] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.887 [2024-11-29 10:25:30.129793] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.887 [2024-11-29 10:25:30.129825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:50.887 [2024-11-29 10:25:30.129834] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:50.887 [2024-11-29 10:25:30.129842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:50.887 [2024-11-29 10:25:30.129853] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:50.887 [2024-11-29 10:25:30.129867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.129877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.887 [2024-11-29 10:25:30.129886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:19:50.887 [2024-11-29 10:25:30.129894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.129989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.887 [2024-11-29 10:25:30.130000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.887 [2024-11-29 10:25:30.130009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:50.887 [2024-11-29 10:25:30.130018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.887 [2024-11-29 10:25:30.130145] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.887 [2024-11-29 10:25:30.130163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.887 [2024-11-29 10:25:30.130177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.887 [2024-11-29 10:25:30.130187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.887 [2024-11-29 10:25:30.130197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.887 [2024-11-29 10:25:30.130205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.887 [2024-11-29 10:25:30.130213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:50.887 [2024-11-29 10:25:30.130225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.887 [2024-11-29 10:25:30.130234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:50.887 [2024-11-29 10:25:30.130242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.887 [2024-11-29 10:25:30.130250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.887 [2024-11-29 10:25:30.130257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:50.887 [2024-11-29 10:25:30.130265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.887 [2024-11-29 10:25:30.130274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.887 [2024-11-29 10:25:30.130284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:50.887 [2024-11-29 10:25:30.130292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.887 [2024-11-29 10:25:30.130300] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.887 [2024-11-29 10:25:30.130307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.888 [2024-11-29 10:25:30.130334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.888 [2024-11-29 10:25:30.130365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.888 [2024-11-29 10:25:30.130390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.888 [2024-11-29 10:25:30.130414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.888 [2024-11-29 10:25:30.130436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.888 [2024-11-29 10:25:30.130454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:50.888 [2024-11-29 10:25:30.130463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:50.888 [2024-11-29 10:25:30.130470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.888 [2024-11-29 10:25:30.130477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:50.888 [2024-11-29 10:25:30.130484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:50.888 [2024-11-29 10:25:30.130493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:50.888 [2024-11-29 10:25:30.130508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:50.888 [2024-11-29 10:25:30.130515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.888 [2024-11-29 10:25:30.130529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.888 [2024-11-29 10:25:30.130536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.888 [2024-11-29 10:25:30.130552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:50.888 
[2024-11-29 10:25:30.130559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.888 [2024-11-29 10:25:30.130566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.888 [2024-11-29 10:25:30.130574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.888 [2024-11-29 10:25:30.130581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.888 [2024-11-29 10:25:30.130587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:50.888 [2024-11-29 10:25:30.130597] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.888 [2024-11-29 10:25:30.130609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:50.888 [2024-11-29 10:25:30.130630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:50.888 [2024-11-29 10:25:30.130637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:50.888 [2024-11-29 10:25:30.130646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:50.888 [2024-11-29 10:25:30.130654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:50.888 [2024-11-29 10:25:30.130661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:50.888 [2024-11-29 10:25:30.130667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:50.888 [2024-11-29 10:25:30.130680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:50.888 [2024-11-29 10:25:30.130687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:50.888 [2024-11-29 10:25:30.130697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:50.888 [2024-11-29 10:25:30.130734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.888 [2024-11-29 10:25:30.130749] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.888 [2024-11-29 10:25:30.130767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.888 [2024-11-29 10:25:30.130775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.888 [2024-11-29 10:25:30.130781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.888 [2024-11-29 10:25:30.130789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.131048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.888 [2024-11-29 10:25:30.131091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:19:50.888 [2024-11-29 10:25:30.131111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.145844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.146021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.888 [2024-11-29 10:25:30.146078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.629 ms 00:19:50.888 [2024-11-29 10:25:30.146102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.146268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.146305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.888 [2024-11-29 10:25:30.146329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:50.888 [2024-11-29 10:25:30.146407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.166764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.167023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.888 [2024-11-29 10:25:30.167116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.311 ms 00:19:50.888 [2024-11-29 10:25:30.167145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.167276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.167312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.888 [2024-11-29 10:25:30.167338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.888 [2024-11-29 10:25:30.167445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.168042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.168119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.888 [2024-11-29 10:25:30.168219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:19:50.888 [2024-11-29 10:25:30.168247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 
10:25:30.168443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.168490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.888 [2024-11-29 10:25:30.168563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:50.888 [2024-11-29 10:25:30.168576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.177536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.177590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.888 [2024-11-29 10:25:30.177609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.922 ms 00:19:50.888 [2024-11-29 10:25:30.177624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.181846] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:50.888 [2024-11-29 10:25:30.181895] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.888 [2024-11-29 10:25:30.181908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.181917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.888 [2024-11-29 10:25:30.181927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms 00:19:50.888 [2024-11-29 10:25:30.181935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.198028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.198083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.888 [2024-11-29 10:25:30.198098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.009 ms 00:19:50.888 [2024-11-29 10:25:30.198107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.888 [2024-11-29 10:25:30.201193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.888 [2024-11-29 10:25:30.201385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.889 [2024-11-29 10:25:30.201403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:19:50.889 [2024-11-29 10:25:30.201412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.204445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.204611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.889 [2024-11-29 10:25:30.204628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:19:50.889 [2024-11-29 10:25:30.204636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.205025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.205044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.889 [2024-11-29 10:25:30.205056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:50.889 [2024-11-29 10:25:30.205064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.230148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.230216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.889 [2024-11-29 10:25:30.230230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.056 ms 00:19:50.889 [2024-11-29 10:25:30.230239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.238419] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.889 [2024-11-29 10:25:30.258362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.258633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.889 [2024-11-29 10:25:30.258654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.020 ms 00:19:50.889 [2024-11-29 10:25:30.258663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.258766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.258778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.889 [2024-11-29 10:25:30.258791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:50.889 [2024-11-29 10:25:30.258841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.258905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.258916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.889 [2024-11-29 10:25:30.258926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:50.889 [2024-11-29 10:25:30.258934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.258961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.258971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.889 [2024-11-29 10:25:30.258980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.889 [2024-11-29 10:25:30.258991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.259038] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.889 [2024-11-29 10:25:30.259055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.259063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.889 [2024-11-29 10:25:30.259076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:50.889 [2024-11-29 10:25:30.259087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.265758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.265981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.889 [2024-11-29 10:25:30.266001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.647 ms 00:19:50.889 [2024-11-29 10:25:30.266019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.266132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.889 [2024-11-29 10:25:30.266144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:50.889 [2024-11-29 10:25:30.266153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:50.889 [2024-11-29 10:25:30.266162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.889 [2024-11-29 10:25:30.267257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.889 [2024-11-29 10:25:30.268737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.592 ms, result 0 00:19:50.889 [2024-11-29 10:25:30.270053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.889 [2024-11-29 10:25:30.277448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.835  [2024-11-29T10:25:32.692Z] Copying: 20/256 [MB] (20 MBps) [2024-11-29T10:25:33.639Z] Copying: 37/256 [MB] (16 MBps) [2024-11-29T10:25:34.586Z] Copying: 54/256 [MB] (16 MBps) [2024-11-29T10:25:35.532Z] Copying: 74/256 [MB] (19 MBps) [2024-11-29T10:25:36.478Z] Copying: 88/256 [MB] (14 MBps) [2024-11-29T10:25:37.423Z] Copying: 99/256 [MB] (11 MBps) [2024-11-29T10:25:38.372Z] Copying: 110/256 [MB] (11 MBps) [2024-11-29T10:25:39.318Z] Copying: 121/256 [MB] (10 MBps) [2024-11-29T10:25:40.708Z] Copying: 132/256 [MB] (11 MBps) [2024-11-29T10:25:41.281Z] Copying: 143/256 [MB] (10 MBps) [2024-11-29T10:25:42.672Z] Copying: 154/256 [MB] (10 MBps) [2024-11-29T10:25:43.616Z] Copying: 164/256 [MB] (10 MBps) [2024-11-29T10:25:44.560Z] Copying: 174/256 [MB] (10 MBps) [2024-11-29T10:25:45.503Z] Copying: 185/256 [MB] (10 MBps) [2024-11-29T10:25:46.449Z] Copying: 195/256 [MB] (10 MBps) [2024-11-29T10:25:47.396Z] Copying: 205/256 [MB] (10 MBps) [2024-11-29T10:25:48.391Z] Copying: 216/256 [MB] (10 MBps) [2024-11-29T10:25:49.388Z] Copying: 226/256 [MB] (10 MBps) [2024-11-29T10:25:50.334Z] Copying: 237/256 [MB] (10 MBps) [2024-11-29T10:25:50.912Z] Copying: 249/256 [MB] (12 MBps) [2024-11-29T10:25:50.912Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-29 10:25:50.872274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.447 [2024-11-29 10:25:50.874310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.874367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.447 [2024-11-29 10:25:50.874382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.447 [2024-11-29 10:25:50.874391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.874414] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:11.447 [2024-11-29 10:25:50.875199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.875248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.447 [2024-11-29 10:25:50.875261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:20:11.447 [2024-11-29 10:25:50.875271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.875558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.875579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.447 
[2024-11-29 10:25:50.875592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:20:11.447 [2024-11-29 10:25:50.875600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.879342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.879367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.447 [2024-11-29 10:25:50.879378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:20:11.447 [2024-11-29 10:25:50.879386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.886385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.886425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:11.447 [2024-11-29 10:25:50.886447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.980 ms 00:20:11.447 [2024-11-29 10:25:50.886458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.889410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.889462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.447 [2024-11-29 10:25:50.889473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:20:11.447 [2024-11-29 10:25:50.889480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.894447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.894501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.447 [2024-11-29 10:25:50.894512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:20:11.447 [2024-11-29 10:25:50.894521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.894658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.894669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.447 [2024-11-29 10:25:50.894692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:11.447 [2024-11-29 10:25:50.894700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.897793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.897849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:11.447 [2024-11-29 10:25:50.897860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:20:11.447 [2024-11-29 10:25:50.897868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.900491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.900685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:11.447 [2024-11-29 10:25:50.900704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:20:11.447 [2024-11-29 10:25:50.900712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.902839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.902885] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.447 [2024-11-29 10:25:50.902894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:20:11.447 [2024-11-29 10:25:50.902901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.905122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-29 10:25:50.905168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.447 [2024-11-29 10:25:50.905178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:20:11.447 [2024-11-29 10:25:50.905185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-29 10:25:50.905227] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.447 [2024-11-29 10:25:50.905243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:20:11.447 [2024-11-29 10:25:50.905391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:11.447 [2024-11-29 10:25:50.905503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.905994] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.448 [2024-11-29 10:25:50.906059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.448 [2024-11-29 10:25:50.906067] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:20:11.448 [2024-11-29 10:25:50.906076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.448 [2024-11-29 10:25:50.906084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:11.448 [2024-11-29 10:25:50.906091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.448 [2024-11-29 10:25:50.906100] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.448 [2024-11-29 10:25:50.906127] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.448 [2024-11-29 10:25:50.906139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.448 [2024-11-29 10:25:50.906147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.448 [2024-11-29 10:25:50.906154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.448 [2024-11-29 10:25:50.906162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.448 [2024-11-29 10:25:50.906169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.448 [2024-11-29 10:25:50.906176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.448 [2024-11-29 10:25:50.906186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:20:11.448 [2024-11-29 10:25:50.906193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.908502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.711 [2024-11-29 10:25:50.908533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.711 [2024-11-29 10:25:50.908545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:20:11.711 [2024-11-29 10:25:50.908559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.908701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.711 [2024-11-29 10:25:50.908711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.711 [2024-11-29 10:25:50.908721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:11.711 [2024-11-29 10:25:50.908735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 
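The statistics dump above shows why WAF prints as "inf": reading WAF in its conventional sense as total media writes divided by user writes (an assumption; the log itself does not define the term), the 960 writes recorded here divide by zero user writes, and IEEE-754 floating-point division by zero yields infinity. A self-contained C sketch:

    #include <stdio.h>

    int main(void)
    {
        /* Counters from the "Dump statistics" block above. */
        const double total_writes = 960.0;  /* "total writes: 960" -- metadata only at this point */
        const double user_writes  = 0.0;    /* "user writes: 0" -- no user data written yet */

        /* IEEE-754: finite / 0.0 == +inf, so printf emits "inf", as in the log. */
        printf("WAF: %g\n", total_writes / user_writes);
        return 0;
    }
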
[2024-11-29 10:25:50.916691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.916743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.711 [2024-11-29 10:25:50.916755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.916769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.916865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.916876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.711 [2024-11-29 10:25:50.916884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.916892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.916949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.916958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.711 [2024-11-29 10:25:50.916966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.916975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.916995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.917003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.711 [2024-11-29 10:25:50.917010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.917018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.930567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.930617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.711 [2024-11-29 10:25:50.930629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.930643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.940649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.940699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.711 [2024-11-29 10:25:50.940710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.940718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.940774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.940784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.711 [2024-11-29 10:25:50.940792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.940823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.940856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.940869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.711 [2024-11-29 10:25:50.940878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.940885] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.940966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.940976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.711 [2024-11-29 10:25:50.940992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.941000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.941034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.941046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.711 [2024-11-29 10:25:50.941054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.941061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.941102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.941115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.711 [2024-11-29 10:25:50.941123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.941131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.941176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.711 [2024-11-29 10:25:50.941189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.711 [2024-11-29 10:25:50.941201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.711 [2024-11-29 10:25:50.941209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.711 [2024-11-29 10:25:50.941366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.022 ms, result 0 00:20:11.711 00:20:11.711 00:20:11.711 10:25:51 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:11.711 10:25:51 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:12.286 10:25:51 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:12.548 [2024-11-29 10:25:51.771623] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
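The spdk_dd invocation above writes --count=1024 blocks of the random pattern to ftl0, and the trim.sh step just before it compares exactly 4194304 bytes. Those two numbers agree if the FTL bdev exposes 4 KiB logical blocks (an assumption, consistent with the 90 MiB L2P sizing further down): 1024 x 4096 = 4194304 bytes = 4096 KiB, which is also the "4096/4096 [kB]" total the copy progress reports later in this run. A tiny C sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        const long block_size = 4096;  /* assumed FTL logical block size, 4 KiB */
        const long count      = 1024;  /* spdk_dd --count=1024 */

        long bytes = block_size * count;
        /* 4194304 bytes -- the exact span trim.sh hands to `cmp --bytes=4194304`. */
        printf("%ld bytes = %ld KiB\n", bytes, bytes / 1024);
        return 0;
    }
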
00:20:12.548 [2024-11-29 10:25:51.772485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87891 ] 00:20:12.548 [2024-11-29 10:25:51.915737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.548 [2024-11-29 10:25:51.939969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.810 [2024-11-29 10:25:52.041428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.810 [2024-11-29 10:25:52.041487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.810 [2024-11-29 10:25:52.200435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.810 [2024-11-29 10:25:52.200490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.810 [2024-11-29 10:25:52.200504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:12.810 [2024-11-29 10:25:52.200512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.810 [2024-11-29 10:25:52.202928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.203097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.811 [2024-11-29 10:25:52.203116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms 00:20:12.811 [2024-11-29 10:25:52.203124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.203524] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.811 [2024-11-29 10:25:52.203857] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.811 [2024-11-29 10:25:52.203891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.203904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.811 [2024-11-29 10:25:52.203914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:12.811 [2024-11-29 10:25:52.203925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.206290] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:12.811 [2024-11-29 10:25:52.209680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.209733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:12.811 [2024-11-29 10:25:52.209748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.393 ms 00:20:12.811 [2024-11-29 10:25:52.209756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.209867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.209883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:12.811 [2024-11-29 10:25:52.209893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:12.811 [2024-11-29 10:25:52.209900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.216856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:12.811 [2024-11-29 10:25:52.216893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.811 [2024-11-29 10:25:52.216903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:20:12.811 [2024-11-29 10:25:52.216918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.217051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.217063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.811 [2024-11-29 10:25:52.217072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:12.811 [2024-11-29 10:25:52.217085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.217112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.217120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.811 [2024-11-29 10:25:52.217128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:12.811 [2024-11-29 10:25:52.217135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.217160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:12.811 [2024-11-29 10:25:52.218906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.218937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.811 [2024-11-29 10:25:52.218947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:20:12.811 [2024-11-29 10:25:52.218959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.219000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.219009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.811 [2024-11-29 10:25:52.219017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:12.811 [2024-11-29 10:25:52.219024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.219042] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:12.811 [2024-11-29 10:25:52.219061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:12.811 [2024-11-29 10:25:52.219105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:12.811 [2024-11-29 10:25:52.219123] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:12.811 [2024-11-29 10:25:52.219227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.811 [2024-11-29 10:25:52.219238] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.811 [2024-11-29 10:25:52.219249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.811 [2024-11-29 10:25:52.219260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219269] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219277] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:12.811 [2024-11-29 10:25:52.219284] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.811 [2024-11-29 10:25:52.219291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.811 [2024-11-29 10:25:52.219301] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.811 [2024-11-29 10:25:52.219314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.219322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.811 [2024-11-29 10:25:52.219329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:12.811 [2024-11-29 10:25:52.219337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.219424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-29 10:25:52.219433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.811 [2024-11-29 10:25:52.219441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:12.811 [2024-11-29 10:25:52.219448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-29 10:25:52.219549] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.811 [2024-11-29 10:25:52.219563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.811 [2024-11-29 10:25:52.219575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.811 [2024-11-29 10:25:52.219600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.811 [2024-11-29 10:25:52.219628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.811 [2024-11-29 10:25:52.219644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.811 [2024-11-29 10:25:52.219651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:12.811 [2024-11-29 10:25:52.219658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.811 [2024-11-29 10:25:52.219667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.811 [2024-11-29 10:25:52.219674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:12.811 [2024-11-29 10:25:52.219682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.811 [2024-11-29 10:25:52.219698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219706] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.811 [2024-11-29 10:25:52.219722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.811 [2024-11-29 10:25:52.219752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.811 [2024-11-29 10:25:52.219775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:12.811 [2024-11-29 10:25:52.219817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.811 [2024-11-29 10:25:52.219834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.811 [2024-11-29 10:25:52.219842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:12.811 [2024-11-29 10:25:52.219850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.811 [2024-11-29 10:25:52.219857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.811 [2024-11-29 10:25:52.219865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:12.811 [2024-11-29 10:25:52.219873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.812 [2024-11-29 10:25:52.219881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.812 [2024-11-29 10:25:52.219905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:12.812 [2024-11-29 10:25:52.219915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.812 [2024-11-29 10:25:52.219924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.812 [2024-11-29 10:25:52.219932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:12.812 [2024-11-29 10:25:52.219940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.812 [2024-11-29 10:25:52.219952] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.812 [2024-11-29 10:25:52.219960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.812 [2024-11-29 10:25:52.219967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.812 [2024-11-29 10:25:52.219974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.812 [2024-11-29 10:25:52.219982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:12.812 [2024-11-29 10:25:52.219991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.812 [2024-11-29 10:25:52.219998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.812 
[2024-11-29 10:25:52.220005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.812 [2024-11-29 10:25:52.220011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.812 [2024-11-29 10:25:52.220018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.812 [2024-11-29 10:25:52.220027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.812 [2024-11-29 10:25:52.220036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:12.812 [2024-11-29 10:25:52.220054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:12.812 [2024-11-29 10:25:52.220061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:12.812 [2024-11-29 10:25:52.220069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:12.812 [2024-11-29 10:25:52.220076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:12.812 [2024-11-29 10:25:52.220083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:12.812 [2024-11-29 10:25:52.220090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:12.812 [2024-11-29 10:25:52.220102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:12.812 [2024-11-29 10:25:52.220109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:12.812 [2024-11-29 10:25:52.220116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:12.812 [2024-11-29 10:25:52.220152] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.812 [2024-11-29 10:25:52.220162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.812 [2024-11-29 10:25:52.220180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.812 [2024-11-29 10:25:52.220187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.812 [2024-11-29 10:25:52.220193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.812 [2024-11-29 10:25:52.220200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.220208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.812 [2024-11-29 10:25:52.220216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:20:12.812 [2024-11-29 10:25:52.220223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.232363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.232407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.812 [2024-11-29 10:25:52.232427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.090 ms 00:20:12.812 [2024-11-29 10:25:52.232435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.232565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.232583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:12.812 [2024-11-29 10:25:52.232591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:12.812 [2024-11-29 10:25:52.232599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.252573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.252825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.812 [2024-11-29 10:25:52.252854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.951 ms 00:20:12.812 [2024-11-29 10:25:52.252867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.252999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.253017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.812 [2024-11-29 10:25:52.253030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:12.812 [2024-11-29 10:25:52.253047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.253571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.253613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.812 [2024-11-29 10:25:52.253629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:20:12.812 [2024-11-29 10:25:52.253643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.253865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.253884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.812 [2024-11-29 10:25:52.253896] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:20:12.812 [2024-11-29 10:25:52.253907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.262595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.262645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.812 [2024-11-29 10:25:52.262668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.655 ms 00:20:12.812 [2024-11-29 10:25:52.262677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.812 [2024-11-29 10:25:52.266593] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:12.812 [2024-11-29 10:25:52.266648] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:12.812 [2024-11-29 10:25:52.266667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.812 [2024-11-29 10:25:52.266676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:12.812 [2024-11-29 10:25:52.266685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.880 ms 00:20:12.812 [2024-11-29 10:25:52.266693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.286915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.286980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:13.075 [2024-11-29 10:25:52.286995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.133 ms 00:20:13.075 [2024-11-29 10:25:52.287005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.290325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.290379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:13.075 [2024-11-29 10:25:52.290391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:20:13.075 [2024-11-29 10:25:52.290399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.293474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.293526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:13.075 [2024-11-29 10:25:52.293536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:20:13.075 [2024-11-29 10:25:52.293543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.293938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.293954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.075 [2024-11-29 10:25:52.293967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:20:13.075 [2024-11-29 10:25:52.293974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.318268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.318330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:13.075 [2024-11-29 10:25:52.318352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.268 ms 00:20:13.075 [2024-11-29 10:25:52.318365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.326888] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:13.075 [2024-11-29 10:25:52.346875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.346929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.075 [2024-11-29 10:25:52.346943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.403 ms 00:20:13.075 [2024-11-29 10:25:52.346952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.075 [2024-11-29 10:25:52.347052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.075 [2024-11-29 10:25:52.347063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:13.075 [2024-11-29 10:25:52.347076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:13.076 [2024-11-29 10:25:52.347085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.076 [2024-11-29 10:25:52.347149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.076 [2024-11-29 10:25:52.347164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.076 [2024-11-29 10:25:52.347173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:13.076 [2024-11-29 10:25:52.347182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.076 [2024-11-29 10:25:52.347210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.076 [2024-11-29 10:25:52.347220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:13.076 [2024-11-29 10:25:52.347229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.076 [2024-11-29 10:25:52.347240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.076 [2024-11-29 10:25:52.347278] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:13.076 [2024-11-29 10:25:52.347289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.076 [2024-11-29 10:25:52.347298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:13.076 [2024-11-29 10:25:52.347306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:13.076 [2024-11-29 10:25:52.347315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.076 [2024-11-29 10:25:52.353345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.076 [2024-11-29 10:25:52.353402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:13.076 [2024-11-29 10:25:52.353417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.007 ms 00:20:13.076 [2024-11-29 10:25:52.353425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.076 [2024-11-29 10:25:52.353522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.076 [2024-11-29 10:25:52.353533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:13.076 [2024-11-29 10:25:52.353542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:13.076 [2024-11-29 10:25:52.353550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.076 
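The L2P numbers in this startup sequence hang together. The layout dump above reports 23592960 L2P entries at 4 bytes each, which comes to exactly the 90.00 MiB of "Region l2p" (and matches blk_sz:0x5a00 = 23040 blocks x 4 KiB in the superblock dump, presumably the same region). The ftl_l2p_cache line then caps the resident portion at 59 of 60 MiB, i.e. only about two thirds of the table is held in memory at once, with the remainder paged on demand (an interpretation of the log line, not a statement about SPDK internals). A standalone C sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        const long entries   = 23592960;  /* "L2P entries" from the layout dump */
        const long addr_size = 4;         /* "L2P address size" in bytes */

        double table_mib = (double)(entries * addr_size) / (1024 * 1024);
        printf("L2P table: %.2f MiB\n", table_mib);  /* 90.00 -- size of "Region l2p" */

        const double resident_mib = 59.0; /* "l2p maximum resident size is: 59 (of 60) MiB" */
        /* ~66% of the table resident at once. */
        printf("resident: %.0f%%\n", 100.0 * resident_mib / table_mib);
        return 0;
    }
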
[2024-11-29 10:25:52.354588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.076 [2024-11-29 10:25:52.355971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.839 ms, result 0 00:20:13.076 [2024-11-29 10:25:52.357722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.076 [2024-11-29 10:25:52.364888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.341  [2024-11-29T10:25:52.806Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-29 10:25:52.748658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.341 [2024-11-29 10:25:52.750070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.750139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.341 [2024-11-29 10:25:52.750153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:13.341 [2024-11-29 10:25:52.750162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.750186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:13.341 [2024-11-29 10:25:52.750926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.750959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.341 [2024-11-29 10:25:52.750972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:20:13.341 [2024-11-29 10:25:52.750981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.753677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.753730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.341 [2024-11-29 10:25:52.753750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:20:13.341 [2024-11-29 10:25:52.753759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.758074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.758111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.341 [2024-11-29 10:25:52.758133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:20:13.341 [2024-11-29 10:25:52.758141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.765056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.765100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.341 [2024-11-29 10:25:52.765112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:20:13.341 [2024-11-29 10:25:52.765128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.768301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.768355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.341 [2024-11-29 10:25:52.768366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.111 ms 00:20:13.341 [2024-11-29 10:25:52.768373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.773617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.773674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.341 [2024-11-29 10:25:52.773686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms 00:20:13.341 [2024-11-29 10:25:52.773694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.773853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.773865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.341 [2024-11-29 10:25:52.773883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:13.341 [2024-11-29 10:25:52.773890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.777438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.777491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.341 [2024-11-29 10:25:52.777501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.528 ms 00:20:13.341 [2024-11-29 10:25:52.777509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.780520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.780575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.341 [2024-11-29 10:25:52.780586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:20:13.341 [2024-11-29 10:25:52.780592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.782884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.783091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.341 [2024-11-29 10:25:52.783110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:20:13.341 [2024-11-29 10:25:52.783117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.785517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.341 [2024-11-29 10:25:52.785573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.341 [2024-11-29 10:25:52.785583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:20:13.341 [2024-11-29 10:25:52.785590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.341 [2024-11-29 10:25:52.785635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.341 [2024-11-29 10:25:52.785652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 
10:25:52.785685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:13.341 [2024-11-29 10:25:52.785909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.341 [2024-11-29 10:25:52.785943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.785998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.342 [2024-11-29 10:25:52.786478] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:13.342 [2024-11-29 10:25:52.786491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:20:13.342 [2024-11-29 10:25:52.786500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.342 [2024-11-29 10:25:52.786508] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.342 
[2024-11-29 10:25:52.786515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.342 [2024-11-29 10:25:52.786523] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.342 [2024-11-29 10:25:52.786535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.342 [2024-11-29 10:25:52.786553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.342 [2024-11-29 10:25:52.786561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.342 [2024-11-29 10:25:52.786567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.342 [2024-11-29 10:25:52.786573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.342 [2024-11-29 10:25:52.786581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.342 [2024-11-29 10:25:52.786589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.342 [2024-11-29 10:25:52.786598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:20:13.342 [2024-11-29 10:25:52.786605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.342 [2024-11-29 10:25:52.789212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.342 [2024-11-29 10:25:52.789378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.342 [2024-11-29 10:25:52.789443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:20:13.342 [2024-11-29 10:25:52.789476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.342 [2024-11-29 10:25:52.789632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.342 [2024-11-29 10:25:52.790047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.342 [2024-11-29 10:25:52.790078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:13.342 [2024-11-29 10:25:52.790099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.342 [2024-11-29 10:25:52.798469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.342 [2024-11-29 10:25:52.798645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.342 [2024-11-29 10:25:52.798720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.342 [2024-11-29 10:25:52.798744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.342 [2024-11-29 10:25:52.798879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.342 [2024-11-29 10:25:52.798905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.343 [2024-11-29 10:25:52.798927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.343 [2024-11-29 10:25:52.798952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.343 [2024-11-29 10:25:52.799025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.343 [2024-11-29 10:25:52.799179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.343 [2024-11-29 10:25:52.799200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.343 [2024-11-29 10:25:52.799218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.343 [2024-11-29 10:25:52.799252] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:13.343 [2024-11-29 10:25:52.799273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.343 [2024-11-29 10:25:52.799282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.343 [2024-11-29 10:25:52.799290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.813038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.813094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.605 [2024-11-29 10:25:52.813106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.813121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.823241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.605 [2024-11-29 10:25:52.823306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.823360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.605 [2024-11-29 10:25:52.823390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.823435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.605 [2024-11-29 10:25:52.823453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.823530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.605 [2024-11-29 10:25:52.823556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.823606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.605 [2024-11-29 10:25:52.823626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.823674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.605 [2024-11-29 10:25:52.823692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:13.605 [2024-11-29 10:25:52.823748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.605 [2024-11-29 10:25:52.823759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.605 [2024-11-29 10:25:52.823768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.605 [2024-11-29 10:25:52.823780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.605 [2024-11-29 10:25:52.824016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.915 ms, result 0 00:20:13.605 00:20:13.605 00:20:13.605 10:25:53 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87911 00:20:13.605 10:25:53 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87911 00:20:13.605 10:25:53 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:13.605 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87911 ']' 00:20:13.605 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.605 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.605 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.605 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.605 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:13.868 [2024-11-29 10:25:53.128987] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:20:13.868 [2024-11-29 10:25:53.129617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87911 ] 00:20:13.868 [2024-11-29 10:25:53.276601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.868 [2024-11-29 10:25:53.305952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.814 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.814 10:25:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:14.814 10:25:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:14.814 [2024-11-29 10:25:54.203492] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.814 [2024-11-29 10:25:54.203586] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.078 [2024-11-29 10:25:54.381755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.381850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.078 [2024-11-29 10:25:54.381870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:15.078 [2024-11-29 10:25:54.381882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.384986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.385054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.078 [2024-11-29 10:25:54.385068] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:20:15.078 [2024-11-29 10:25:54.385078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.385241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.078 [2024-11-29 10:25:54.385523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.078 [2024-11-29 10:25:54.385540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.385553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.078 [2024-11-29 10:25:54.385564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:15.078 [2024-11-29 10:25:54.385574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.387415] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:15.078 [2024-11-29 10:25:54.391446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.391503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:15.078 [2024-11-29 10:25:54.391517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:20:15.078 [2024-11-29 10:25:54.391526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.391618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.391629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:15.078 [2024-11-29 10:25:54.391642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:15.078 [2024-11-29 10:25:54.391651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.400275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.400323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.078 [2024-11-29 10:25:54.400340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 00:20:15.078 [2024-11-29 10:25:54.400348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.400493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.400504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.078 [2024-11-29 10:25:54.400519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:15.078 [2024-11-29 10:25:54.400526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.400564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.400575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.078 [2024-11-29 10:25:54.400585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:15.078 [2024-11-29 10:25:54.400593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.400618] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:15.078 [2024-11-29 10:25:54.402702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:15.078 [2024-11-29 10:25:54.402747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.078 [2024-11-29 10:25:54.402760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:20:15.078 [2024-11-29 10:25:54.402770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.402830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.078 [2024-11-29 10:25:54.402841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.078 [2024-11-29 10:25:54.402851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:15.078 [2024-11-29 10:25:54.402860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.078 [2024-11-29 10:25:54.402883] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:15.078 [2024-11-29 10:25:54.402906] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:15.078 [2024-11-29 10:25:54.402942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:15.078 [2024-11-29 10:25:54.402964] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:15.079 [2024-11-29 10:25:54.403071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:15.079 [2024-11-29 10:25:54.403084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.079 [2024-11-29 10:25:54.403096] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:15.079 [2024-11-29 10:25:54.403109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403119] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:15.079 [2024-11-29 10:25:54.403147] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.079 [2024-11-29 10:25:54.403160] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:15.079 [2024-11-29 10:25:54.403167] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:15.079 [2024-11-29 10:25:54.403176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.079 [2024-11-29 10:25:54.403185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.079 [2024-11-29 10:25:54.403195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:15.079 [2024-11-29 10:25:54.403202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.079 [2024-11-29 10:25:54.403292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.079 [2024-11-29 10:25:54.403300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.079 [2024-11-29 10:25:54.403309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:15.079 [2024-11-29 10:25:54.403316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.079 [2024-11-29 10:25:54.403421] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.079 [2024-11-29 10:25:54.403439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.079 [2024-11-29 10:25:54.403452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.079 [2024-11-29 10:25:54.403483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.079 [2024-11-29 10:25:54.403512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.079 [2024-11-29 10:25:54.403529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.079 [2024-11-29 10:25:54.403536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:15.079 [2024-11-29 10:25:54.403549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.079 [2024-11-29 10:25:54.403557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.079 [2024-11-29 10:25:54.403566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:15.079 [2024-11-29 10:25:54.403573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:15.079 [2024-11-29 10:25:54.403591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.079 [2024-11-29 10:25:54.403621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.079 [2024-11-29 10:25:54.403648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.079 [2024-11-29 10:25:54.403675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.079 [2024-11-29 10:25:54.403703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.079 [2024-11-29 
10:25:54.403731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.079 [2024-11-29 10:25:54.403749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.079 [2024-11-29 10:25:54.403756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:15.079 [2024-11-29 10:25:54.403768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.079 [2024-11-29 10:25:54.403776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:15.079 [2024-11-29 10:25:54.403787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:15.079 [2024-11-29 10:25:54.403794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:15.079 [2024-11-29 10:25:54.403822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:15.079 [2024-11-29 10:25:54.403830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403837] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.079 [2024-11-29 10:25:54.403846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.079 [2024-11-29 10:25:54.403853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.079 [2024-11-29 10:25:54.403870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.079 [2024-11-29 10:25:54.403878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.079 [2024-11-29 10:25:54.403885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.079 [2024-11-29 10:25:54.403894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.079 [2024-11-29 10:25:54.403900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.079 [2024-11-29 10:25:54.403910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.079 [2024-11-29 10:25:54.403918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.079 [2024-11-29 10:25:54.403933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.403944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:15.079 [2024-11-29 10:25:54.403954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:15.079 [2024-11-29 10:25:54.403961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:15.079 [2024-11-29 10:25:54.403970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:15.079 [2024-11-29 10:25:54.403978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:15.079 
[2024-11-29 10:25:54.403990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:15.079 [2024-11-29 10:25:54.403997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:15.079 [2024-11-29 10:25:54.404008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:15.079 [2024-11-29 10:25:54.404015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:15.079 [2024-11-29 10:25:54.404025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.404032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.404047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.404054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.404065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:15.079 [2024-11-29 10:25:54.404072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.079 [2024-11-29 10:25:54.404082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.404090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.079 [2024-11-29 10:25:54.404099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.079 [2024-11-29 10:25:54.404106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.079 [2024-11-29 10:25:54.404116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.079 [2024-11-29 10:25:54.404123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.079 [2024-11-29 10:25:54.404132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.079 [2024-11-29 10:25:54.404140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:20:15.080 [2024-11-29 10:25:54.404149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.417588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.417633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.080 [2024-11-29 10:25:54.417644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.381 ms 00:20:15.080 [2024-11-29 10:25:54.417658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.417789] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.417841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.080 [2024-11-29 10:25:54.417851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:15.080 [2024-11-29 10:25:54.417861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.429582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.429630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.080 [2024-11-29 10:25:54.429641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.694 ms 00:20:15.080 [2024-11-29 10:25:54.429654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.429721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.429737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.080 [2024-11-29 10:25:54.429745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.080 [2024-11-29 10:25:54.429755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.430279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.430305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.080 [2024-11-29 10:25:54.430315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:20:15.080 [2024-11-29 10:25:54.430331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.430478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.430500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.080 [2024-11-29 10:25:54.430510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:15.080 [2024-11-29 10:25:54.430522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.438303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.438348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.080 [2024-11-29 10:25:54.438358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.755 ms 00:20:15.080 [2024-11-29 10:25:54.438367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.452419] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:15.080 [2024-11-29 10:25:54.452483] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:15.080 [2024-11-29 10:25:54.452504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.452517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:15.080 [2024-11-29 10:25:54.452529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.028 ms 00:20:15.080 [2024-11-29 10:25:54.452540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.468752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 
10:25:54.468970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:15.080 [2024-11-29 10:25:54.468993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.147 ms 00:20:15.080 [2024-11-29 10:25:54.469007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.472171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.472347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:15.080 [2024-11-29 10:25:54.472365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:20:15.080 [2024-11-29 10:25:54.472375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.475168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.475222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:15.080 [2024-11-29 10:25:54.475231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:20:15.080 [2024-11-29 10:25:54.475240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.475596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.475609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.080 [2024-11-29 10:25:54.475619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:15.080 [2024-11-29 10:25:54.475628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.498301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.498524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:15.080 [2024-11-29 10:25:54.498591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.652 ms 00:20:15.080 [2024-11-29 10:25:54.498629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.507043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:15.080 [2024-11-29 10:25:54.521312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.521430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.080 [2024-11-29 10:25:54.521482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.477 ms 00:20:15.080 [2024-11-29 10:25:54.521505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.521591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.521619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:15.080 [2024-11-29 10:25:54.521640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:15.080 [2024-11-29 10:25:54.521659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.521722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.521732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.080 [2024-11-29 10:25:54.521742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:15.080 [2024-11-29 
10:25:54.521748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.521773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.521785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.080 [2024-11-29 10:25:54.521825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:15.080 [2024-11-29 10:25:54.521833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.521869] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:15.080 [2024-11-29 10:25:54.521877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.521886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:15.080 [2024-11-29 10:25:54.521894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:15.080 [2024-11-29 10:25:54.521902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.526139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.526250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.080 [2024-11-29 10:25:54.526299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.214 ms 00:20:15.080 [2024-11-29 10:25:54.526326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.526656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.080 [2024-11-29 10:25:54.526728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.080 [2024-11-29 10:25:54.526821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:15.080 [2024-11-29 10:25:54.526849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.080 [2024-11-29 10:25:54.527686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.080 [2024-11-29 10:25:54.528911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.663 ms, result 0 00:20:15.080 [2024-11-29 10:25:54.530462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.342 Some configs were skipped because the RPC state that can call them passed over. 
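The `trace_step` notices above follow a fixed per-step pattern from `mngt/ftl_mngt.c` (an `Action`/`Rollback` line, then `name:`, `duration:`, and `status:` lines), so the per-step cost of a management process such as 'FTL startup' (145.663 ms total here) can be tallied straight from the log. Below is a minimal sketch of such a tally, not part of the test itself: the record layout is taken from the notices above, while the script name, prefix handling, and summary format are illustrative assumptions only.

    import re
    import sys
    from collections import defaultdict

    # Matches the trace_step notices emitted by mngt/ftl_mngt.c, e.g.:
    #   [...] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
    #   [...] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms
    # Illustrative assumption: one record per line, as in a saved autotest log.
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+?)\s*$")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([\d.]+) ms")

    def summarize(path):
        durations = defaultdict(float)  # step name -> total ms across the log
        pending = {}                    # device (e.g. ftl0) -> step name awaiting its duration line
        with open(path) as log:
            for line in log:
                m = NAME_RE.search(line)
                if m:
                    pending[m.group(1)] = m.group(2)
                    continue
                m = DUR_RE.search(line)
                if m and m.group(1) in pending:
                    durations[pending.pop(m.group(1))] += float(m.group(2))
        for name, ms in sorted(durations.items(), key=lambda kv: -kv[1]):
            print(f"{ms:10.3f} ms  {name}")

    if __name__ == "__main__":
        summarize(sys.argv[1])

Run over the startup trace above, the three largest buckets would be 'Restore P2L checkpoints' (22.652 ms), 'Initialize L2P' (22.477 ms), and 'Restore valid map metadata' (16.147 ms), consistent with the individual notices; note that feeding it the whole log would aggregate across every startup and shutdown it contains.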
00:20:15.342 10:25:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:15.342 [2024-11-29 10:25:54.754455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.342 [2024-11-29 10:25:54.754585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:15.342 [2024-11-29 10:25:54.754606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:20:15.342 [2024-11-29 10:25:54.754615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.342 [2024-11-29 10:25:54.754652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.024 ms, result 0 00:20:15.342 true 00:20:15.342 10:25:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:15.603 [2024-11-29 10:25:54.962015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.603 [2024-11-29 10:25:54.962060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:15.603 [2024-11-29 10:25:54.962072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:20:15.603 [2024-11-29 10:25:54.962081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.603 [2024-11-29 10:25:54.962129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.256 ms, result 0 00:20:15.603 true 00:20:15.603 10:25:54 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87911 00:20:15.603 10:25:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87911 ']' 00:20:15.603 10:25:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87911 00:20:15.603 10:25:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:15.603 10:25:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.603 10:25:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87911 00:20:15.603 killing process with pid 87911 00:20:15.603 10:25:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.603 10:25:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.603 10:25:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87911' 00:20:15.603 10:25:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87911 00:20:15.603 10:25:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87911 00:20:15.865 [2024-11-29 10:25:55.100187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.100235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.865 [2024-11-29 10:25:55.100250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.865 [2024-11-29 10:25:55.100258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.100283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.865 [2024-11-29 10:25:55.100714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.100741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.865 [2024-11-29 10:25:55.100751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.418 ms 00:20:15.865 [2024-11-29 10:25:55.100763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.101054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.101073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.865 [2024-11-29 10:25:55.101082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:15.865 [2024-11-29 10:25:55.101092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.105588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.105621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.865 [2024-11-29 10:25:55.105631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.477 ms 00:20:15.865 [2024-11-29 10:25:55.105644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.112607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.112638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.865 [2024-11-29 10:25:55.112648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.931 ms 00:20:15.865 [2024-11-29 10:25:55.112659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.115043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.115080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.865 [2024-11-29 10:25:55.115089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:20:15.865 [2024-11-29 10:25:55.115097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.118690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.118727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.865 [2024-11-29 10:25:55.118739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:20:15.865 [2024-11-29 10:25:55.118748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.118884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.118896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.865 [2024-11-29 10:25:55.118909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:15.865 [2024-11-29 10:25:55.118918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.122110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.122174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.865 [2024-11-29 10:25:55.122185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:20:15.865 [2024-11-29 10:25:55.122197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.865 [2024-11-29 10:25:55.124717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.865 [2024-11-29 10:25:55.124861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.866 [2024-11-29 
10:25:55.124876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:20:15.866 [2024-11-29 10:25:55.124885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.866 [2024-11-29 10:25:55.126913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.866 [2024-11-29 10:25:55.126946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.866 [2024-11-29 10:25:55.126955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:20:15.866 [2024-11-29 10:25:55.126963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.866 [2024-11-29 10:25:55.128855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.866 [2024-11-29 10:25:55.128891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.866 [2024-11-29 10:25:55.128900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:20:15.866 [2024-11-29 10:25:55.128909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.866 [2024-11-29 10:25:55.128941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.866 [2024-11-29 10:25:55.128957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.128966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.128977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.128985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.128994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129099] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 
10:25:55.129309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.866 [2024-11-29 10:25:55.129517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.866 [2024-11-29 10:25:55.129820] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.866 [2024-11-29 10:25:55.129827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:20:15.866 [2024-11-29 10:25:55.129843] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.866 [2024-11-29 10:25:55.129851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.866 [2024-11-29 10:25:55.129859] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.866 [2024-11-29 10:25:55.129869] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.866 [2024-11-29 10:25:55.129877] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.866 [2024-11-29 10:25:55.129889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.866 [2024-11-29 10:25:55.129897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.866 [2024-11-29 10:25:55.129903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.867 [2024-11-29 10:25:55.129912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.867 [2024-11-29 10:25:55.129919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.867 [2024-11-29 10:25:55.129928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.867 [2024-11-29 10:25:55.129936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:20:15.867 [2024-11-29 10:25:55.129946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.131427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.867 [2024-11-29 10:25:55.131448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.867 [2024-11-29 10:25:55.131457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:20:15.867 [2024-11-29 10:25:55.131465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.131546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:15.867 [2024-11-29 10:25:55.131559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.867 [2024-11-29 10:25:55.131567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:15.867 [2024-11-29 10:25:55.131576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.136893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.136928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.867 [2024-11-29 10:25:55.136937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.136947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.137022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.137034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.867 [2024-11-29 10:25:55.137042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.137053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.137092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.137107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.867 [2024-11-29 10:25:55.137114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.137123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.137140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.137149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.867 [2024-11-29 10:25:55.137157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.137165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.146555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.146598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.867 [2024-11-29 10:25:55.146608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.146623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.153794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.153843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.867 [2024-11-29 10:25:55.153853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.153865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.153922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.153934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.867 [2024-11-29 10:25:55.153941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.153951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
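[Editor's note: each FTL management stage above is traced as an Action / name / duration / status quadruple, and the clean-shutdown path additionally dumps per-band validity ("Band N: <valid> / 261120 wr_cnt: ... state: ...") and device statistics. A minimal bash sketch for summarizing such a capture, assuming the console output has been saved to a file named ftl_trim.log (hypothetical name):

    # count how many bands the dump reports as free (100 in the dump above)
    grep -c 'state: free' ftl_trim.log

    # total the per-stage 'duration: X ms' figures emitted by trace_step,
    # for comparison with the 'Management process finished ... duration' summaries
    grep -o 'duration: [0-9.]* ms' ftl_trim.log | awk '{total += $2} END {print total, "ms"}'

This is an illustrative reading aid only, not part of the test itself.]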
00:20:15.867 [2024-11-29 10:25:55.153982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.153992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.867 [2024-11-29 10:25:55.153999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.154008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.154075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.154088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.867 [2024-11-29 10:25:55.154095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.154105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.154161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.154174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.867 [2024-11-29 10:25:55.154182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.154193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.154231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.154243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.867 [2024-11-29 10:25:55.154251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.154260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.154302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.867 [2024-11-29 10:25:55.154314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.867 [2024-11-29 10:25:55.154322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.867 [2024-11-29 10:25:55.154331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.867 [2024-11-29 10:25:55.154462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.249 ms, result 0 00:20:16.128 10:25:55 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.128 [2024-11-29 10:25:55.394983] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
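[Editor's note: the spdk_dd invocation captured above, reflowed here for readability; all paths and flags are exactly as logged. --ib names the FTL bdev used as input, --of a regular output file, and --json the test-generated SPDK config; --count appears to be in logical blocks (65536 x 4 KiB = 256 MB, matching the "Copying: .../256 [MB]" progress lines that follow):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
]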
00:20:16.128 [2024-11-29 10:25:55.395103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87947 ] 00:20:16.128 [2024-11-29 10:25:55.540615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.128 [2024-11-29 10:25:55.560609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.390 [2024-11-29 10:25:55.654505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.390 [2024-11-29 10:25:55.654577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.390 [2024-11-29 10:25:55.815246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.815311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.390 [2024-11-29 10:25:55.815326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.390 [2024-11-29 10:25:55.815336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.817933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.818154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.390 [2024-11-29 10:25:55.818177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:20:16.390 [2024-11-29 10:25:55.818186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.818846] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.390 [2024-11-29 10:25:55.819196] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.390 [2024-11-29 10:25:55.819246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.819256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.390 [2024-11-29 10:25:55.819271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:20:16.390 [2024-11-29 10:25:55.819282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.821276] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.390 [2024-11-29 10:25:55.825482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.825537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.390 [2024-11-29 10:25:55.825555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:20:16.390 [2024-11-29 10:25:55.825564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.825663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.825678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.390 [2024-11-29 10:25:55.825687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:16.390 [2024-11-29 10:25:55.825695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.834314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:16.390 [2024-11-29 10:25:55.834363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.390 [2024-11-29 10:25:55.834375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.572 ms 00:20:16.390 [2024-11-29 10:25:55.834387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.834537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.834549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.390 [2024-11-29 10:25:55.834558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:16.390 [2024-11-29 10:25:55.834569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.834600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.834609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.390 [2024-11-29 10:25:55.834617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:16.390 [2024-11-29 10:25:55.834629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.834652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.390 [2024-11-29 10:25:55.836694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.836881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.390 [2024-11-29 10:25:55.836900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:20:16.390 [2024-11-29 10:25:55.836914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.836963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-11-29 10:25:55.836972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.390 [2024-11-29 10:25:55.836985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:16.390 [2024-11-29 10:25:55.836993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-11-29 10:25:55.837012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.390 [2024-11-29 10:25:55.837032] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.391 [2024-11-29 10:25:55.837073] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.391 [2024-11-29 10:25:55.837091] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.391 [2024-11-29 10:25:55.837196] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.391 [2024-11-29 10:25:55.837207] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.391 [2024-11-29 10:25:55.837218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:16.391 [2024-11-29 10:25:55.837231] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837241] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837253] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.391 [2024-11-29 10:25:55.837261] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.391 [2024-11-29 10:25:55.837268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.391 [2024-11-29 10:25:55.837278] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.391 [2024-11-29 10:25:55.837289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-11-29 10:25:55.837297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.391 [2024-11-29 10:25:55.837306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:16.391 [2024-11-29 10:25:55.837314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-11-29 10:25:55.837402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-11-29 10:25:55.837412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.391 [2024-11-29 10:25:55.837422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:16.391 [2024-11-29 10:25:55.837430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-11-29 10:25:55.837540] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.391 [2024-11-29 10:25:55.837557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.391 [2024-11-29 10:25:55.837567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.391 [2024-11-29 10:25:55.837592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.391 [2024-11-29 10:25:55.837620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.391 [2024-11-29 10:25:55.837637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.391 [2024-11-29 10:25:55.837644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.391 [2024-11-29 10:25:55.837652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.391 [2024-11-29 10:25:55.837660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.391 [2024-11-29 10:25:55.837668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:16.391 [2024-11-29 10:25:55.837676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.391 [2024-11-29 10:25:55.837692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837699] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.391 [2024-11-29 10:25:55.837716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.391 [2024-11-29 10:25:55.837748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.391 [2024-11-29 10:25:55.837772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.391 [2024-11-29 10:25:55.837794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.391 [2024-11-29 10:25:55.837830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.391 [2024-11-29 10:25:55.837843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.391 [2024-11-29 10:25:55.837849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:16.391 [2024-11-29 10:25:55.837856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.391 [2024-11-29 10:25:55.837864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.391 [2024-11-29 10:25:55.837871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:16.391 [2024-11-29 10:25:55.837879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.391 [2024-11-29 10:25:55.837893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:16.391 [2024-11-29 10:25:55.837900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.391 [2024-11-29 10:25:55.837919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.391 [2024-11-29 10:25:55.837927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.391 [2024-11-29 10:25:55.837946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.391 [2024-11-29 10:25:55.837953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.391 [2024-11-29 10:25:55.837960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.391 
[2024-11-29 10:25:55.837968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.391 [2024-11-29 10:25:55.837974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.391 [2024-11-29 10:25:55.837980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.391 [2024-11-29 10:25:55.837991] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.391 [2024-11-29 10:25:55.838000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.391 [2024-11-29 10:25:55.838018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:16.391 [2024-11-29 10:25:55.838026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.391 [2024-11-29 10:25:55.838032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:16.391 [2024-11-29 10:25:55.838040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:16.391 [2024-11-29 10:25:55.838047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:16.391 [2024-11-29 10:25:55.838054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:16.391 [2024-11-29 10:25:55.838067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:16.391 [2024-11-29 10:25:55.838074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:16.391 [2024-11-29 10:25:55.838081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:16.391 [2024-11-29 10:25:55.838132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.391 [2024-11-29 10:25:55.838143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.391 [2024-11-29 10:25:55.838162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.391 [2024-11-29 10:25:55.838170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.391 [2024-11-29 10:25:55.838178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.391 [2024-11-29 10:25:55.838185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-11-29 10:25:55.838193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.391 [2024-11-29 10:25:55.838204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:20:16.392 [2024-11-29 10:25:55.838212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.392 [2024-11-29 10:25:55.852125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.392 [2024-11-29 10:25:55.852298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.392 [2024-11-29 10:25:55.852316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.861 ms 00:20:16.392 [2024-11-29 10:25:55.852326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.392 [2024-11-29 10:25:55.852460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.392 [2024-11-29 10:25:55.852477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.392 [2024-11-29 10:25:55.852486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:16.392 [2024-11-29 10:25:55.852494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.654 [2024-11-29 10:25:55.879117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.654 [2024-11-29 10:25:55.879313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.654 [2024-11-29 10:25:55.879333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.597 ms 00:20:16.654 [2024-11-29 10:25:55.879348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.654 [2024-11-29 10:25:55.879453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.654 [2024-11-29 10:25:55.879465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.654 [2024-11-29 10:25:55.879475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.654 [2024-11-29 10:25:55.879486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.654 [2024-11-29 10:25:55.880026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.654 [2024-11-29 10:25:55.880062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.654 [2024-11-29 10:25:55.880075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:20:16.654 [2024-11-29 10:25:55.880085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.654 [2024-11-29 10:25:55.880240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.654 [2024-11-29 10:25:55.880261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.654 [2024-11-29 10:25:55.880270] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:16.655 [2024-11-29 10:25:55.880280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.888000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.888040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.655 [2024-11-29 10:25:55.888057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.696 ms 00:20:16.655 [2024-11-29 10:25:55.888065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.891423] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:16.655 [2024-11-29 10:25:55.891471] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.655 [2024-11-29 10:25:55.891484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.891493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.655 [2024-11-29 10:25:55.891502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:20:16.655 [2024-11-29 10:25:55.891509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.907258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.907315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.655 [2024-11-29 10:25:55.907331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.686 ms 00:20:16.655 [2024-11-29 10:25:55.907339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.910383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.910502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.655 [2024-11-29 10:25:55.910516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:20:16.655 [2024-11-29 10:25:55.910524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.912417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.912447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.655 [2024-11-29 10:25:55.912456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:20:16.655 [2024-11-29 10:25:55.912463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.912780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.912791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.655 [2024-11-29 10:25:55.912820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:20:16.655 [2024-11-29 10:25:55.912828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.929336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.929497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.655 [2024-11-29 10:25:55.929517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.486 ms 00:20:16.655 [2024-11-29 10:25:55.929527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.937077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:16.655 [2024-11-29 10:25:55.951445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.951489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.655 [2024-11-29 10:25:55.951501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.846 ms 00:20:16.655 [2024-11-29 10:25:55.951509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.951580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.951591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.655 [2024-11-29 10:25:55.951602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:16.655 [2024-11-29 10:25:55.951610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.951659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.951668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.655 [2024-11-29 10:25:55.951676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:16.655 [2024-11-29 10:25:55.951683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.951707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.951715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.655 [2024-11-29 10:25:55.951723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.655 [2024-11-29 10:25:55.951732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.951763] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.655 [2024-11-29 10:25:55.951772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.951779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.655 [2024-11-29 10:25:55.951787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:16.655 [2024-11-29 10:25:55.951795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.956013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.956053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.655 [2024-11-29 10:25:55.956062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:20:16.655 [2024-11-29 10:25:55.956069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 [2024-11-29 10:25:55.956163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.655 [2024-11-29 10:25:55.956173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:16.655 [2024-11-29 10:25:55.956182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:16.655 [2024-11-29 10:25:55.956189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.655 
[2024-11-29 10:25:55.956982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.655 [2024-11-29 10:25:55.958021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.465 ms, result 0 00:20:16.655 [2024-11-29 10:25:55.959354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.655 [2024-11-29 10:25:55.968532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.602  [2024-11-29T10:25:58.457Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T10:25:59.028Z] Copying: 29/256 [MB] (14 MBps) [2024-11-29T10:26:00.416Z] Copying: 52/256 [MB] (23 MBps) [2024-11-29T10:26:01.362Z] Copying: 65/256 [MB] (12 MBps) [2024-11-29T10:26:02.305Z] Copying: 85/256 [MB] (20 MBps) [2024-11-29T10:26:03.248Z] Copying: 106/256 [MB] (21 MBps) [2024-11-29T10:26:04.187Z] Copying: 126/256 [MB] (20 MBps) [2024-11-29T10:26:05.124Z] Copying: 148/256 [MB] (21 MBps) [2024-11-29T10:26:06.064Z] Copying: 174/256 [MB] (25 MBps) [2024-11-29T10:26:07.448Z] Copying: 191/256 [MB] (17 MBps) [2024-11-29T10:26:08.392Z] Copying: 214/256 [MB] (22 MBps) [2024-11-29T10:26:08.967Z] Copying: 235/256 [MB] (21 MBps) [2024-11-29T10:26:09.230Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-29 10:26:09.147359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.765 [2024-11-29 10:26:09.149942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.150239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.765 [2024-11-29 10:26:09.150419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.765 [2024-11-29 10:26:09.150492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.151203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:29.765 [2024-11-29 10:26:09.152023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.152096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.765 [2024-11-29 10:26:09.152129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:20:29.765 [2024-11-29 10:26:09.152154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.152986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.153037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.765 [2024-11-29 10:26:09.153070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:20:29.765 [2024-11-29 10:26:09.153093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.157594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.157613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.765 [2024-11-29 10:26:09.157623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.457 ms 00:20:29.765 [2024-11-29 10:26:09.157631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.164622] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.164648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.765 [2024-11-29 10:26:09.164658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.953 ms 00:20:29.765 [2024-11-29 10:26:09.164673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.167051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.167168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.765 [2024-11-29 10:26:09.167182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:20:29.765 [2024-11-29 10:26:09.167189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.170575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.170686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.765 [2024-11-29 10:26:09.170700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:20:29.765 [2024-11-29 10:26:09.170714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.170845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.170855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.765 [2024-11-29 10:26:09.170867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:29.765 [2024-11-29 10:26:09.170874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.173210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.173239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:29.765 [2024-11-29 10:26:09.173248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:20:29.765 [2024-11-29 10:26:09.173254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.175313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.175342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:29.765 [2024-11-29 10:26:09.175350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:20:29.765 [2024-11-29 10:26:09.175356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.177062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.177166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.765 [2024-11-29 10:26:09.177179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:20:29.765 [2024-11-29 10:26:09.177186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.765 [2024-11-29 10:26:09.178986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.765 [2024-11-29 10:26:09.179015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.765 [2024-11-29 10:26:09.179023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:20:29.765 [2024-11-29 10:26:09.179030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
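[Editor's note: the figures in the startup dump above are internally consistent, which is a useful sanity check when reading these logs: 23592960 L2P entries at the reported 4-byte address size come to exactly 90 MiB, matching "Region l2p ... blocks: 90.00 MiB", and the trim.sh@100 unmap at --lba 23591936 for 1024 blocks covers precisely the last 1024 LBAs of that address space. Quick shell arithmetic (illustrative only):

    echo $(( 23592960 * 4 / 1024 / 1024 ))   # 90       -> L2P table size in MiB
    echo $(( 23591936 + 1024 ))              # 23592960 -> the unmap range ends at the last L2P entry
]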
00:20:29.765 [2024-11-29 10:26:09.179060] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.765 [2024-11-29 10:26:09.179074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.765 [2024-11-29 10:26:09.179218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 
[2024-11-29 10:26:09.179247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:20:29.766 [2024-11-29 10:26:09.179439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.766 [2024-11-29 10:26:09.179826] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.766 [2024-11-29 10:26:09.179833] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db1a25ec-2b75-4dc5-810a-a3dfa0443739 00:20:29.766 [2024-11-29 10:26:09.179842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.766 [2024-11-29 10:26:09.179849] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.766 [2024-11-29 10:26:09.179856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.766 [2024-11-29 10:26:09.179863] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.766 [2024-11-29 10:26:09.179870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.766 [2024-11-29 10:26:09.179882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.766 [2024-11-29 10:26:09.179890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.766 [2024-11-29 10:26:09.179896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.766 [2024-11-29 10:26:09.179902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:29.766 [2024-11-29 10:26:09.179909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.766 [2024-11-29 10:26:09.179916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.766 [2024-11-29 10:26:09.179925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:20:29.767 [2024-11-29 10:26:09.179932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.181547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.767 [2024-11-29 10:26:09.181588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.767 [2024-11-29 10:26:09.181609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:20:29.767 [2024-11-29 10:26:09.181637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.181726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.767 [2024-11-29 10:26:09.181917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.767 [2024-11-29 10:26:09.181951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:29.767 [2024-11-29 10:26:09.181969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.187071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.187170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.767 [2024-11-29 10:26:09.187216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.187243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.187303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.187324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.767 [2024-11-29 10:26:09.187351] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.187372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.187424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.187556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.767 [2024-11-29 10:26:09.187580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.187598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.187630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.187650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.767 [2024-11-29 10:26:09.187668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.187727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.196665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.196808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.767 [2024-11-29 10:26:09.196858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.196885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.204034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.204162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.767 [2024-11-29 10:26:09.204210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.204232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.204273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.204294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.767 [2024-11-29 10:26:09.204313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.204331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.204371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.204396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.767 [2024-11-29 10:26:09.204458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.204481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.204561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.204609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.767 [2024-11-29 10:26:09.204641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.204659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.204726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.204866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:20:29.767 [2024-11-29 10:26:09.204921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.204950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.205031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.205079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.767 [2024-11-29 10:26:09.205124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.205145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.205203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.767 [2024-11-29 10:26:09.205231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.767 [2024-11-29 10:26:09.205250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.767 [2024-11-29 10:26:09.205269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.767 [2024-11-29 10:26:09.205414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.473 ms, result 0 00:20:30.027 00:20:30.027 00:20:30.027 10:26:09 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:30.601 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:30.601 10:26:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:30.601 10:26:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:30.601 10:26:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:30.601 10:26:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:30.601 10:26:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:30.601 10:26:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:30.863 Process with pid 87911 is not found 00:20:30.863 10:26:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87911 00:20:30.863 10:26:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87911 ']' 00:20:30.863 10:26:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87911 00:20:30.863 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87911) - No such process 00:20:30.863 10:26:10 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87911 is not found' 00:20:30.863 00:20:30.863 real 1m9.367s 00:20:30.863 user 1m32.286s 00:20:30.863 sys 0m5.149s 00:20:30.863 10:26:10 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.863 10:26:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:30.863 ************************************ 00:20:30.863 END TEST ftl_trim 00:20:30.863 ************************************ 00:20:30.863 10:26:10 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:30.863 10:26:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:30.863 10:26:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.863 10:26:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:30.863 ************************************ 00:20:30.863 START TEST ftl_restore 
00:20:30.863 ************************************ 00:20:30.863 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:30.863 * Looking for test storage... 00:20:30.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:30.863 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:30.863 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.864 10:26:10 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.864 --rc genhtml_branch_coverage=1 00:20:30.864 --rc genhtml_function_coverage=1 00:20:30.864 --rc genhtml_legend=1 00:20:30.864 --rc geninfo_all_blocks=1 00:20:30.864 --rc geninfo_unexecuted_blocks=1 00:20:30.864 00:20:30.864 ' 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.864 --rc genhtml_branch_coverage=1 00:20:30.864 --rc genhtml_function_coverage=1 00:20:30.864 --rc genhtml_legend=1 00:20:30.864 --rc geninfo_all_blocks=1 00:20:30.864 --rc geninfo_unexecuted_blocks=1 00:20:30.864 00:20:30.864 ' 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.864 --rc genhtml_branch_coverage=1 00:20:30.864 --rc genhtml_function_coverage=1 00:20:30.864 --rc genhtml_legend=1 00:20:30.864 --rc geninfo_all_blocks=1 00:20:30.864 --rc geninfo_unexecuted_blocks=1 00:20:30.864 00:20:30.864 ' 00:20:30.864 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.864 --rc genhtml_branch_coverage=1 00:20:30.864 --rc genhtml_function_coverage=1 00:20:30.864 --rc genhtml_legend=1 00:20:30.864 --rc geninfo_all_blocks=1 00:20:30.864 --rc geninfo_unexecuted_blocks=1 00:20:30.864 00:20:30.864 ' 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
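The xtrace block above is autotest's lcov version gate: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, so 'lt 1.15 2' reduces to comparing the leading fields (1 vs 2) and returns success, after which the '--rc lcov_*_coverage' flag spellings are exported. Condensed, as reconstructed from the trace (helper names are from scripts/common.sh):

    # lt A B  ->  cmp_versions A '<' B; fields split on IFS=.-:
    # lt 1.15 2: the first fields, 1 vs 2, already decide => returns 0
    # => lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'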
00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:30.864 10:26:10 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:31.125 10:26:10 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.0xQryBbvuD 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:31.126 
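To recap the configuration the trace above establishes, restore.sh was started as

    ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0

so -c selects 0000:00:10.0 as the NV-cache device, the remaining positional argument 0000:00:11.0 becomes the base device, scratch files live under the mktemp directory (/tmp/tmp.0xQryBbvuD in this run), and RPCs get a 240 s timeout. One quirk to expect further down: restore.sh line 54 logs "[: : integer expression expected" because it numerically tests a flag variable that is empty in this configuration ('[' '' -eq 1 ']'); the script carries on regardless. A defensive spelling of such a test, with the variable and function names purely illustrative, would be:

    # Guard a numeric test against an unset or empty flag:
    [ "${fast_shutdown:-0}" -eq 1 ] && do_fast_shutdown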
10:26:10 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88167 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88167 00:20:31.126 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88167 ']' 00:20:31.126 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.126 10:26:10 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:31.126 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.126 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.126 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.126 10:26:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:31.126 [2024-11-29 10:26:10.410600] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:20:31.126 [2024-11-29 10:26:10.410725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88167 ] 00:20:31.126 [2024-11-29 10:26:10.555336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.126 [2024-11-29 10:26:10.574837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:32.070 10:26:11 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:32.070 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:32.332 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:32.332 { 00:20:32.332 "name": "nvme0n1", 00:20:32.332 "aliases": [ 00:20:32.332 "9b247aab-237f-427e-a284-3dac7da09844" 00:20:32.332 ], 00:20:32.332 "product_name": "NVMe disk", 00:20:32.332 "block_size": 4096, 00:20:32.332 "num_blocks": 1310720, 00:20:32.332 "uuid": 
"9b247aab-237f-427e-a284-3dac7da09844", 00:20:32.332 "numa_id": -1, 00:20:32.332 "assigned_rate_limits": { 00:20:32.332 "rw_ios_per_sec": 0, 00:20:32.332 "rw_mbytes_per_sec": 0, 00:20:32.332 "r_mbytes_per_sec": 0, 00:20:32.332 "w_mbytes_per_sec": 0 00:20:32.332 }, 00:20:32.332 "claimed": true, 00:20:32.332 "claim_type": "read_many_write_one", 00:20:32.332 "zoned": false, 00:20:32.332 "supported_io_types": { 00:20:32.332 "read": true, 00:20:32.332 "write": true, 00:20:32.332 "unmap": true, 00:20:32.332 "flush": true, 00:20:32.332 "reset": true, 00:20:32.332 "nvme_admin": true, 00:20:32.332 "nvme_io": true, 00:20:32.332 "nvme_io_md": false, 00:20:32.332 "write_zeroes": true, 00:20:32.332 "zcopy": false, 00:20:32.332 "get_zone_info": false, 00:20:32.332 "zone_management": false, 00:20:32.332 "zone_append": false, 00:20:32.332 "compare": true, 00:20:32.332 "compare_and_write": false, 00:20:32.332 "abort": true, 00:20:32.332 "seek_hole": false, 00:20:32.332 "seek_data": false, 00:20:32.332 "copy": true, 00:20:32.332 "nvme_iov_md": false 00:20:32.332 }, 00:20:32.332 "driver_specific": { 00:20:32.332 "nvme": [ 00:20:32.332 { 00:20:32.332 "pci_address": "0000:00:11.0", 00:20:32.332 "trid": { 00:20:32.332 "trtype": "PCIe", 00:20:32.332 "traddr": "0000:00:11.0" 00:20:32.332 }, 00:20:32.332 "ctrlr_data": { 00:20:32.332 "cntlid": 0, 00:20:32.332 "vendor_id": "0x1b36", 00:20:32.332 "model_number": "QEMU NVMe Ctrl", 00:20:32.332 "serial_number": "12341", 00:20:32.332 "firmware_revision": "8.0.0", 00:20:32.332 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:32.332 "oacs": { 00:20:32.332 "security": 0, 00:20:32.332 "format": 1, 00:20:32.332 "firmware": 0, 00:20:32.332 "ns_manage": 1 00:20:32.332 }, 00:20:32.332 "multi_ctrlr": false, 00:20:32.332 "ana_reporting": false 00:20:32.332 }, 00:20:32.332 "vs": { 00:20:32.332 "nvme_version": "1.4" 00:20:32.332 }, 00:20:32.332 "ns_data": { 00:20:32.332 "id": 1, 00:20:32.332 "can_share": false 00:20:32.332 } 00:20:32.332 } 00:20:32.332 ], 00:20:32.332 "mp_policy": "active_passive" 00:20:32.332 } 00:20:32.332 } 00:20:32.332 ]' 00:20:32.332 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:32.332 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:32.332 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:32.593 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:32.593 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:32.593 10:26:11 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:32.594 10:26:11 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:32.594 10:26:11 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:32.594 10:26:11 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:32.594 10:26:11 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:32.594 10:26:11 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:32.594 10:26:12 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=922b0a90-50be-4185-8699-ef5f30654eb5 00:20:32.594 10:26:12 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:32.594 10:26:12 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 922b0a90-50be-4185-8699-ef5f30654eb5 00:20:32.855 10:26:12 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:33.116 10:26:12 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=e20adc4a-8534-4c36-af50-0f3661835787 00:20:33.116 10:26:12 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e20adc4a-8534-4c36-af50-0f3661835787 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:33.378 10:26:12 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.378 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.378 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:33.378 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:33.378 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:33.378 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.640 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:33.640 { 00:20:33.640 "name": "9a421cb2-cbf2-4235-a444-518e93fdd757", 00:20:33.640 "aliases": [ 00:20:33.640 "lvs/nvme0n1p0" 00:20:33.640 ], 00:20:33.640 "product_name": "Logical Volume", 00:20:33.640 "block_size": 4096, 00:20:33.640 "num_blocks": 26476544, 00:20:33.640 "uuid": "9a421cb2-cbf2-4235-a444-518e93fdd757", 00:20:33.640 "assigned_rate_limits": { 00:20:33.640 "rw_ios_per_sec": 0, 00:20:33.640 "rw_mbytes_per_sec": 0, 00:20:33.640 "r_mbytes_per_sec": 0, 00:20:33.640 "w_mbytes_per_sec": 0 00:20:33.640 }, 00:20:33.640 "claimed": false, 00:20:33.640 "zoned": false, 00:20:33.640 "supported_io_types": { 00:20:33.640 "read": true, 00:20:33.640 "write": true, 00:20:33.640 "unmap": true, 00:20:33.640 "flush": false, 00:20:33.640 "reset": true, 00:20:33.640 "nvme_admin": false, 00:20:33.640 "nvme_io": false, 00:20:33.640 "nvme_io_md": false, 00:20:33.640 "write_zeroes": true, 00:20:33.640 "zcopy": false, 00:20:33.640 "get_zone_info": false, 00:20:33.640 "zone_management": false, 00:20:33.640 "zone_append": false, 00:20:33.640 "compare": false, 00:20:33.640 "compare_and_write": false, 00:20:33.640 "abort": false, 00:20:33.640 "seek_hole": true, 00:20:33.640 "seek_data": true, 00:20:33.640 "copy": false, 00:20:33.640 "nvme_iov_md": false 00:20:33.640 }, 00:20:33.640 "driver_specific": { 00:20:33.640 "lvol": { 00:20:33.640 "lvol_store_uuid": "e20adc4a-8534-4c36-af50-0f3661835787", 00:20:33.640 "base_bdev": "nvme0n1", 00:20:33.640 "thin_provision": true, 00:20:33.640 "num_allocated_clusters": 0, 00:20:33.640 "snapshot": false, 00:20:33.640 "clone": false, 00:20:33.640 "esnap_clone": false 00:20:33.640 } 00:20:33.640 } 00:20:33.640 } 00:20:33.640 ]' 00:20:33.640 10:26:12 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:33.640 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:33.640 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:33.640 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:33.640 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:33.640 10:26:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:33.640 10:26:12 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:33.640 10:26:12 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:33.640 10:26:12 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:33.901 10:26:13 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:33.901 10:26:13 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:33.901 10:26:13 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.901 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:33.901 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:33.901 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:33.901 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:33.901 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:34.164 { 00:20:34.164 "name": "9a421cb2-cbf2-4235-a444-518e93fdd757", 00:20:34.164 "aliases": [ 00:20:34.164 "lvs/nvme0n1p0" 00:20:34.164 ], 00:20:34.164 "product_name": "Logical Volume", 00:20:34.164 "block_size": 4096, 00:20:34.164 "num_blocks": 26476544, 00:20:34.164 "uuid": "9a421cb2-cbf2-4235-a444-518e93fdd757", 00:20:34.164 "assigned_rate_limits": { 00:20:34.164 "rw_ios_per_sec": 0, 00:20:34.164 "rw_mbytes_per_sec": 0, 00:20:34.164 "r_mbytes_per_sec": 0, 00:20:34.164 "w_mbytes_per_sec": 0 00:20:34.164 }, 00:20:34.164 "claimed": false, 00:20:34.164 "zoned": false, 00:20:34.164 "supported_io_types": { 00:20:34.164 "read": true, 00:20:34.164 "write": true, 00:20:34.164 "unmap": true, 00:20:34.164 "flush": false, 00:20:34.164 "reset": true, 00:20:34.164 "nvme_admin": false, 00:20:34.164 "nvme_io": false, 00:20:34.164 "nvme_io_md": false, 00:20:34.164 "write_zeroes": true, 00:20:34.164 "zcopy": false, 00:20:34.164 "get_zone_info": false, 00:20:34.164 "zone_management": false, 00:20:34.164 "zone_append": false, 00:20:34.164 "compare": false, 00:20:34.164 "compare_and_write": false, 00:20:34.164 "abort": false, 00:20:34.164 "seek_hole": true, 00:20:34.164 "seek_data": true, 00:20:34.164 "copy": false, 00:20:34.164 "nvme_iov_md": false 00:20:34.164 }, 00:20:34.164 "driver_specific": { 00:20:34.164 "lvol": { 00:20:34.164 "lvol_store_uuid": "e20adc4a-8534-4c36-af50-0f3661835787", 00:20:34.164 "base_bdev": "nvme0n1", 00:20:34.164 "thin_provision": true, 00:20:34.164 "num_allocated_clusters": 0, 00:20:34.164 "snapshot": false, 00:20:34.164 "clone": false, 00:20:34.164 "esnap_clone": false 00:20:34.164 } 00:20:34.164 } 00:20:34.164 } 00:20:34.164 ]' 00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:34.164 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:34.164 10:26:13 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:34.164 10:26:13 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:34.425 10:26:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:34.425 10:26:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:34.425 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:34.425 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:34.425 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:34.425 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:34.425 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a421cb2-cbf2-4235-a444-518e93fdd757 00:20:34.684 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:34.684 { 00:20:34.684 "name": "9a421cb2-cbf2-4235-a444-518e93fdd757", 00:20:34.684 "aliases": [ 00:20:34.684 "lvs/nvme0n1p0" 00:20:34.684 ], 00:20:34.684 "product_name": "Logical Volume", 00:20:34.684 "block_size": 4096, 00:20:34.684 "num_blocks": 26476544, 00:20:34.684 "uuid": "9a421cb2-cbf2-4235-a444-518e93fdd757", 00:20:34.684 "assigned_rate_limits": { 00:20:34.684 "rw_ios_per_sec": 0, 00:20:34.684 "rw_mbytes_per_sec": 0, 00:20:34.684 "r_mbytes_per_sec": 0, 00:20:34.684 "w_mbytes_per_sec": 0 00:20:34.684 }, 00:20:34.684 "claimed": false, 00:20:34.684 "zoned": false, 00:20:34.684 "supported_io_types": { 00:20:34.684 "read": true, 00:20:34.684 "write": true, 00:20:34.684 "unmap": true, 00:20:34.684 "flush": false, 00:20:34.684 "reset": true, 00:20:34.684 "nvme_admin": false, 00:20:34.684 "nvme_io": false, 00:20:34.684 "nvme_io_md": false, 00:20:34.684 "write_zeroes": true, 00:20:34.684 "zcopy": false, 00:20:34.684 "get_zone_info": false, 00:20:34.684 "zone_management": false, 00:20:34.684 "zone_append": false, 00:20:34.684 "compare": false, 00:20:34.684 "compare_and_write": false, 00:20:34.684 "abort": false, 00:20:34.684 "seek_hole": true, 00:20:34.684 "seek_data": true, 00:20:34.684 "copy": false, 00:20:34.684 "nvme_iov_md": false 00:20:34.684 }, 00:20:34.684 "driver_specific": { 00:20:34.684 "lvol": { 00:20:34.684 "lvol_store_uuid": "e20adc4a-8534-4c36-af50-0f3661835787", 00:20:34.684 "base_bdev": "nvme0n1", 00:20:34.684 "thin_provision": true, 00:20:34.684 "num_allocated_clusters": 0, 00:20:34.684 "snapshot": false, 00:20:34.684 "clone": false, 00:20:34.684 "esnap_clone": false 00:20:34.684 } 00:20:34.684 } 00:20:34.684 } 00:20:34.684 ]' 00:20:34.684 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:34.684 10:26:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:34.684 10:26:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:34.684 10:26:14 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:34.684 10:26:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:34.684 10:26:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9a421cb2-cbf2-4235-a444-518e93fdd757 --l2p_dram_limit 10' 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:34.684 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:34.684 10:26:14 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9a421cb2-cbf2-4235-a444-518e93fdd757 --l2p_dram_limit 10 -c nvc0n1p0 00:20:34.945 [2024-11-29 10:26:14.224441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.224480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:34.945 [2024-11-29 10:26:14.224491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:34.945 [2024-11-29 10:26:14.224499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.224545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.224556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.945 [2024-11-29 10:26:14.224562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:34.945 [2024-11-29 10:26:14.224571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.224585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:34.945 [2024-11-29 10:26:14.225051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:34.945 [2024-11-29 10:26:14.225074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.225084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.945 [2024-11-29 10:26:14.225092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:20:34.945 [2024-11-29 10:26:14.225104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.225141] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b21615cc-511d-4102-af29-37fdb7e3e0e1 00:20:34.945 [2024-11-29 10:26:14.226070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.226087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:34.945 [2024-11-29 10:26:14.226096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:34.945 [2024-11-29 10:26:14.226102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.230688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 
10:26:14.230714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.945 [2024-11-29 10:26:14.230724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:20:34.945 [2024-11-29 10:26:14.230729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.230788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.230795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.945 [2024-11-29 10:26:14.230812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:34.945 [2024-11-29 10:26:14.230818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.230862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.230870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:34.945 [2024-11-29 10:26:14.230878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:34.945 [2024-11-29 10:26:14.230883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.230901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:34.945 [2024-11-29 10:26:14.232108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.232133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.945 [2024-11-29 10:26:14.232140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:20:34.945 [2024-11-29 10:26:14.232147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.232172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.232180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:34.945 [2024-11-29 10:26:14.232186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:34.945 [2024-11-29 10:26:14.232194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.232212] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:34.945 [2024-11-29 10:26:14.232322] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:34.945 [2024-11-29 10:26:14.232331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:34.945 [2024-11-29 10:26:14.232340] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:34.945 [2024-11-29 10:26:14.232347] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232357] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232366] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:34.945 [2024-11-29 10:26:14.232376] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:34.945 [2024-11-29 10:26:14.232381] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:34.945 [2024-11-29 10:26:14.232388] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:34.945 [2024-11-29 10:26:14.232393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.232400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:34.945 [2024-11-29 10:26:14.232406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:20:34.945 [2024-11-29 10:26:14.232413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.232475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.945 [2024-11-29 10:26:14.232484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:34.945 [2024-11-29 10:26:14.232490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:34.945 [2024-11-29 10:26:14.232498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.945 [2024-11-29 10:26:14.232568] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:34.945 [2024-11-29 10:26:14.232577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:34.945 [2024-11-29 10:26:14.232583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:34.945 [2024-11-29 10:26:14.232601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:34.945 [2024-11-29 10:26:14.232618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.945 [2024-11-29 10:26:14.232629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:34.945 [2024-11-29 10:26:14.232636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:34.945 [2024-11-29 10:26:14.232641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.945 [2024-11-29 10:26:14.232649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:34.945 [2024-11-29 10:26:14.232654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:34.945 [2024-11-29 10:26:14.232662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:34.945 [2024-11-29 10:26:14.232673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:34.945 [2024-11-29 10:26:14.232690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:34.945 
[2024-11-29 10:26:14.232707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:34.945 [2024-11-29 10:26:14.232723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:34.945 [2024-11-29 10:26:14.232743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.945 [2024-11-29 10:26:14.232754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:34.945 [2024-11-29 10:26:14.232760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.945 [2024-11-29 10:26:14.232773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:34.945 [2024-11-29 10:26:14.232780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:34.945 [2024-11-29 10:26:14.232785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.945 [2024-11-29 10:26:14.232792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:34.945 [2024-11-29 10:26:14.232798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:34.945 [2024-11-29 10:26:14.232815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.945 [2024-11-29 10:26:14.232820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:34.946 [2024-11-29 10:26:14.232828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:34.946 [2024-11-29 10:26:14.232833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.946 [2024-11-29 10:26:14.232839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:34.946 [2024-11-29 10:26:14.232849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:34.946 [2024-11-29 10:26:14.232859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.946 [2024-11-29 10:26:14.232865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.946 [2024-11-29 10:26:14.232875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:34.946 [2024-11-29 10:26:14.232882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:34.946 [2024-11-29 10:26:14.232889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:34.946 [2024-11-29 10:26:14.232895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:34.946 [2024-11-29 10:26:14.232901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:34.946 [2024-11-29 10:26:14.232907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:34.946 [2024-11-29 10:26:14.232917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:34.946 [2024-11-29 
10:26:14.232925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.232933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:34.946 [2024-11-29 10:26:14.232939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:34.946 [2024-11-29 10:26:14.232947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:34.946 [2024-11-29 10:26:14.232953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:34.946 [2024-11-29 10:26:14.232960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:34.946 [2024-11-29 10:26:14.232966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:34.946 [2024-11-29 10:26:14.232976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:34.946 [2024-11-29 10:26:14.232983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:34.946 [2024-11-29 10:26:14.232990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:34.946 [2024-11-29 10:26:14.232996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.233003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.233009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.233017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.233023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:34.946 [2024-11-29 10:26:14.233030] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:34.946 [2024-11-29 10:26:14.233037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.233045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:34.946 [2024-11-29 10:26:14.233051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:34.946 [2024-11-29 10:26:14.233058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:34.946 [2024-11-29 10:26:14.233064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:34.946 [2024-11-29 10:26:14.233072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.946 [2024-11-29 10:26:14.233078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:34.946 [2024-11-29 10:26:14.233087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:20:34.946 [2024-11-29 10:26:14.233096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.946 [2024-11-29 10:26:14.233127] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:34.946 [2024-11-29 10:26:14.233134] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:39.152 [2024-11-29 10:26:17.991024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:17.991121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:39.152 [2024-11-29 10:26:17.991142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3757.872 ms 00:20:39.152 [2024-11-29 10:26:17.991152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.005091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.005149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.152 [2024-11-29 10:26:18.005168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.814 ms 00:20:39.152 [2024-11-29 10:26:18.005177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.005291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.005302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.152 [2024-11-29 10:26:18.005316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:39.152 [2024-11-29 10:26:18.005325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.018346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.018401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.152 [2024-11-29 10:26:18.018416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.956 ms 00:20:39.152 [2024-11-29 10:26:18.018428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.018464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.018472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.152 [2024-11-29 10:26:18.018484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:39.152 [2024-11-29 10:26:18.018492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.019108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.019148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.152 [2024-11-29 10:26:18.019163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:20:39.152 [2024-11-29 10:26:18.019178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 
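Two details in the startup trace above are worth unpacking; both notes are editorial sketches inferred from this log, not part of the test output. First, the full bdev_ftl_create invocation traced at restore.sh line 58, reproduced for readability with the same arguments and base-bdev UUID this log reports (the -t 240 RPC timeout is presumably there because a fresh FTL instance scrubs its NV cache first — 3757.872 ms for 5 chunks in this run, per the "this may take a while" notice above):

    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 9a421cb2-cbf2-4235-a444-518e93fdd757 \
        --l2p_dram_limit 10 -c nvc0n1p0

Second, the layout figures are internally consistent: 20971520 L2P entries at the reported address size of 4 bytes come to exactly the 80.00 MiB shown for the l2p region in the NV cache layout dump, of which --l2p_dram_limit 10 keeps only about 10 MiB resident in DRAM (see the "l2p maximum resident size is: 9 (of 10) MiB" notice just below):

    # Cross-check of the layout dump: entries * bytes-per-entry, in MiB
    echo $(( 20971520 * 4 / 1048576 ))   # -> 80, matching "Region l2p ... blocks: 80.00 MiB"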
[2024-11-29 10:26:18.019316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.019327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.152 [2024-11-29 10:26:18.019339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:39.152 [2024-11-29 10:26:18.019348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.027926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.027969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.152 [2024-11-29 10:26:18.027982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.551 ms 00:20:39.152 [2024-11-29 10:26:18.027990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.048019] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:39.152 [2024-11-29 10:26:18.052131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.052184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.152 [2024-11-29 10:26:18.052199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.061 ms 00:20:39.152 [2024-11-29 10:26:18.052211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.152 [2024-11-29 10:26:18.133500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.152 [2024-11-29 10:26:18.133581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:39.152 [2024-11-29 10:26:18.133597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.238 ms 00:20:39.152 [2024-11-29 10:26:18.133611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.133825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.133839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.153 [2024-11-29 10:26:18.133849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:20:39.153 [2024-11-29 10:26:18.133859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.139403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.139465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:39.153 [2024-11-29 10:26:18.139476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.492 ms 00:20:39.153 [2024-11-29 10:26:18.139486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.144584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.144640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:39.153 [2024-11-29 10:26:18.144651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.069 ms 00:20:39.153 [2024-11-29 10:26:18.144661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.145021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.145034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.153 
[2024-11-29 10:26:18.145044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:39.153 [2024-11-29 10:26:18.145056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.187086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.187151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:39.153 [2024-11-29 10:26:18.187164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.992 ms 00:20:39.153 [2024-11-29 10:26:18.187175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.194284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.194342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:39.153 [2024-11-29 10:26:18.194354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.047 ms 00:20:39.153 [2024-11-29 10:26:18.194364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.200098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.200151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:39.153 [2024-11-29 10:26:18.200161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.685 ms 00:20:39.153 [2024-11-29 10:26:18.200172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.206349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.206404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.153 [2024-11-29 10:26:18.206415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:20:39.153 [2024-11-29 10:26:18.206428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.206479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.206492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.153 [2024-11-29 10:26:18.206502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:39.153 [2024-11-29 10:26:18.206513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.206603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.153 [2024-11-29 10:26:18.206616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.153 [2024-11-29 10:26:18.206629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:39.153 [2024-11-29 10:26:18.206639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.153 [2024-11-29 10:26:18.207841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3982.847 ms, result 0 00:20:39.153 { 00:20:39.153 "name": "ftl0", 00:20:39.153 "uuid": "b21615cc-511d-4102-af29-37fdb7e3e0e1" 00:20:39.153 } 00:20:39.153 10:26:18 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:39.153 10:26:18 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:39.153 10:26:18 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:39.153 10:26:18 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:39.423 [2024-11-29 10:26:18.655222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.655279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.423 [2024-11-29 10:26:18.655295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.423 [2024-11-29 10:26:18.655303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.655330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.423 [2024-11-29 10:26:18.656097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.656149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.423 [2024-11-29 10:26:18.656160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:20:39.423 [2024-11-29 10:26:18.656172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.656440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.656461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.423 [2024-11-29 10:26:18.656471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:39.423 [2024-11-29 10:26:18.656482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.659760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.659790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.423 [2024-11-29 10:26:18.659809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:20:39.423 [2024-11-29 10:26:18.659819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.666072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.666113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.423 [2024-11-29 10:26:18.666139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:20:39.423 [2024-11-29 10:26:18.666149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.669010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.669067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.423 [2024-11-29 10:26:18.669077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:20:39.423 [2024-11-29 10:26:18.669087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.675248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.675310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.423 [2024-11-29 10:26:18.675321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:20:39.423 [2024-11-29 10:26:18.675331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.675468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.675490] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.423 [2024-11-29 10:26:18.675499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:39.423 [2024-11-29 10:26:18.675509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.678684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.678741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.423 [2024-11-29 10:26:18.678751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:20:39.423 [2024-11-29 10:26:18.678760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.681167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.681224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.423 [2024-11-29 10:26:18.681234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:20:39.423 [2024-11-29 10:26:18.681245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.682956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.683010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.423 [2024-11-29 10:26:18.683020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:20:39.423 [2024-11-29 10:26:18.683029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.684551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.423 [2024-11-29 10:26:18.684609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.423 [2024-11-29 10:26:18.684619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:20:39.423 [2024-11-29 10:26:18.684632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.423 [2024-11-29 10:26:18.684704] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.424 [2024-11-29 10:26:18.684722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684825] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.684995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 
[2024-11-29 10:26:18.685050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:39.424 [2024-11-29 10:26:18.685271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.424 [2024-11-29 10:26:18.685377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.425 [2024-11-29 10:26:18.685641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.425 [2024-11-29 10:26:18.685650] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b21615cc-511d-4102-af29-37fdb7e3e0e1 00:20:39.425 [2024-11-29 10:26:18.685661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.425 [2024-11-29 10:26:18.685669] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.425 [2024-11-29 10:26:18.685678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.425 [2024-11-29 10:26:18.685689] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.425 [2024-11-29 10:26:18.685699] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.425 [2024-11-29 10:26:18.685707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.425 [2024-11-29 10:26:18.685716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.425 [2024-11-29 10:26:18.685723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.425 [2024-11-29 10:26:18.685732] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:39.425 [2024-11-29 10:26:18.685739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.425 [2024-11-29 10:26:18.685749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.425 [2024-11-29 10:26:18.685758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:20:39.425 [2024-11-29 10:26:18.685767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.688054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.425 [2024-11-29 10:26:18.688105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.425 [2024-11-29 10:26:18.688116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:20:39.425 [2024-11-29 10:26:18.688127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.688250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.425 [2024-11-29 10:26:18.688263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.425 [2024-11-29 10:26:18.688272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:39.425 [2024-11-29 10:26:18.688283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.696355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.696413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.425 [2024-11-29 10:26:18.696429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 [2024-11-29 10:26:18.696439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.696504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.696520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.425 [2024-11-29 10:26:18.696528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 [2024-11-29 10:26:18.696538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.696600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.696615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.425 [2024-11-29 10:26:18.696626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 [2024-11-29 10:26:18.696636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.696654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.696665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.425 [2024-11-29 10:26:18.696672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 [2024-11-29 10:26:18.696682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.711534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.711604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.425 [2024-11-29 10:26:18.711623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 
[2024-11-29 10:26:18.711634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.723639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.723708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.425 [2024-11-29 10:26:18.723720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 [2024-11-29 10:26:18.723731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.723848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.723867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.425 [2024-11-29 10:26:18.723877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.425 [2024-11-29 10:26:18.723890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.425 [2024-11-29 10:26:18.723938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.425 [2024-11-29 10:26:18.723950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.426 [2024-11-29 10:26:18.723958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.426 [2024-11-29 10:26:18.723968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.426 [2024-11-29 10:26:18.724047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.426 [2024-11-29 10:26:18.724065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.426 [2024-11-29 10:26:18.724073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.426 [2024-11-29 10:26:18.724084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.426 [2024-11-29 10:26:18.724123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.426 [2024-11-29 10:26:18.724136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.426 [2024-11-29 10:26:18.724144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.426 [2024-11-29 10:26:18.724159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.426 [2024-11-29 10:26:18.724200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.426 [2024-11-29 10:26:18.724215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.426 [2024-11-29 10:26:18.724224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.426 [2024-11-29 10:26:18.724234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.426 [2024-11-29 10:26:18.724287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.426 [2024-11-29 10:26:18.724301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.426 [2024-11-29 10:26:18.724309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.426 [2024-11-29 10:26:18.724321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.426 [2024-11-29 10:26:18.724478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.207 ms, result 0 00:20:39.426 true 00:20:39.426 10:26:18 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88167 00:20:39.426 
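Before the data phase below, two of the shutdown statistics above deserve a brief gloss (again an editorial note, not test output). With user writes at 0, WAF — total writes divided by user writes, by the usual definition — is undefined and prints as "inf"; the 960 total writes are presumably all FTL metadata (superblock, band and chunk info), since no user I/O touched the device before unload. The dd transfer reported just below also checks out arithmetically:

    echo $(( 256 * 1024 * 4096 ))   # bs=4K count=256K -> 1073741824 bytes (1.0 GiB)
    # 1073741824 B / 3.78285 s = ~283.8 MB/s, which dd rounds to the reported 284 MB/s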
10:26:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88167 ']' 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88167 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88167 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.426 killing process with pid 88167 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88167' 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88167 00:20:39.426 10:26:18 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88167 00:20:44.835 10:26:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:48.123 262144+0 records in 00:20:48.124 262144+0 records out 00:20:48.124 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.78285 s, 284 MB/s 00:20:48.124 10:26:27 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:50.672 10:26:29 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:50.672 [2024-11-29 10:26:29.592947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:20:50.672 [2024-11-29 10:26:29.593064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88387 ] 00:20:50.672 [2024-11-29 10:26:29.738946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.672 [2024-11-29 10:26:29.761736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.672 [2024-11-29 10:26:29.876698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.672 [2024-11-29 10:26:29.876794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.672 [2024-11-29 10:26:30.039139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.039207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.672 [2024-11-29 10:26:30.039222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:50.672 [2024-11-29 10:26:30.039237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.039302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.039314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.672 [2024-11-29 10:26:30.039323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:50.672 [2024-11-29 10:26:30.039337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.039364] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:50.672 [2024-11-29 10:26:30.039666] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.672 [2024-11-29 10:26:30.039691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.039700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.672 [2024-11-29 10:26:30.039711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:20:50.672 [2024-11-29 10:26:30.039724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.041554] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:50.672 [2024-11-29 10:26:30.045274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.045330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:50.672 [2024-11-29 10:26:30.045343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:20:50.672 [2024-11-29 10:26:30.045361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.045439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.045452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:50.672 [2024-11-29 10:26:30.045461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:50.672 [2024-11-29 10:26:30.045469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.053906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.053951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.672 [2024-11-29 10:26:30.053965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.392 ms 00:20:50.672 [2024-11-29 10:26:30.053974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.054078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.054088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.672 [2024-11-29 10:26:30.054101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:50.672 [2024-11-29 10:26:30.054112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.054183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.054194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.672 [2024-11-29 10:26:30.054202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:50.672 [2024-11-29 10:26:30.054215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.054238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.672 [2024-11-29 10:26:30.056347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.056386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.672 [2024-11-29 10:26:30.056396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:20:50.672 [2024-11-29 10:26:30.056405] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.056440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.056449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.672 [2024-11-29 10:26:30.056463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:50.672 [2024-11-29 10:26:30.056477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.056506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.672 [2024-11-29 10:26:30.056527] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:50.672 [2024-11-29 10:26:30.056570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.672 [2024-11-29 10:26:30.056590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:50.672 [2024-11-29 10:26:30.056697] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.672 [2024-11-29 10:26:30.056709] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.672 [2024-11-29 10:26:30.056722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:50.672 [2024-11-29 10:26:30.056732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.672 [2024-11-29 10:26:30.056741] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.672 [2024-11-29 10:26:30.056750] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.672 [2024-11-29 10:26:30.056765] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.672 [2024-11-29 10:26:30.056774] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.672 [2024-11-29 10:26:30.056783] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.672 [2024-11-29 10:26:30.056793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.056818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.672 [2024-11-29 10:26:30.056831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:50.672 [2024-11-29 10:26:30.056844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.056934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.672 [2024-11-29 10:26:30.056942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.672 [2024-11-29 10:26:30.056950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:50.672 [2024-11-29 10:26:30.056957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.672 [2024-11-29 10:26:30.057059] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.672 [2024-11-29 10:26:30.057071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.672 [2024-11-29 10:26:30.057080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.672 
[2024-11-29 10:26:30.057093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.672 [2024-11-29 10:26:30.057102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.672 [2024-11-29 10:26:30.057111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.672 [2024-11-29 10:26:30.057119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.672 [2024-11-29 10:26:30.057127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.672 [2024-11-29 10:26:30.057135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.672 [2024-11-29 10:26:30.057142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.672 [2024-11-29 10:26:30.057152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.672 [2024-11-29 10:26:30.057161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.672 [2024-11-29 10:26:30.057169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.673 [2024-11-29 10:26:30.057177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.673 [2024-11-29 10:26:30.057186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:50.673 [2024-11-29 10:26:30.057194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.673 [2024-11-29 10:26:30.057209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.673 [2024-11-29 10:26:30.057235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.673 [2024-11-29 10:26:30.057257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.673 [2024-11-29 10:26:30.057285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.673 [2024-11-29 10:26:30.057308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.673 [2024-11-29 10:26:30.057332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.673 [2024-11-29 10:26:30.057346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:50.673 [2024-11-29 10:26:30.057354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:50.673 [2024-11-29 10:26:30.057361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.673 [2024-11-29 10:26:30.057369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.673 [2024-11-29 10:26:30.057377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:50.673 [2024-11-29 10:26:30.057384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.673 [2024-11-29 10:26:30.057400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:50.673 [2024-11-29 10:26:30.057409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057417] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.673 [2024-11-29 10:26:30.057427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.673 [2024-11-29 10:26:30.057436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.673 [2024-11-29 10:26:30.057458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.673 [2024-11-29 10:26:30.057465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.673 [2024-11-29 10:26:30.057472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.673 [2024-11-29 10:26:30.057479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.673 [2024-11-29 10:26:30.057486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.673 [2024-11-29 10:26:30.057493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.673 [2024-11-29 10:26:30.057501] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.673 [2024-11-29 10:26:30.057511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.673 [2024-11-29 10:26:30.057527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:50.673 [2024-11-29 10:26:30.057535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:50.673 [2024-11-29 10:26:30.057544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:50.673 [2024-11-29 10:26:30.057550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:50.673 [2024-11-29 10:26:30.057557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:50.673 [2024-11-29 10:26:30.057564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:50.673 [2024-11-29 10:26:30.057571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:50.673 [2024-11-29 10:26:30.057578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:50.673 [2024-11-29 10:26:30.057602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:50.673 [2024-11-29 10:26:30.057637] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.673 [2024-11-29 10:26:30.057646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.673 [2024-11-29 10:26:30.057661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.673 [2024-11-29 10:26:30.057668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.673 [2024-11-29 10:26:30.057678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.673 [2024-11-29 10:26:30.057685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.057692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.673 [2024-11-29 10:26:30.057701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:20:50.673 [2024-11-29 10:26:30.057717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.071933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.071975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.673 [2024-11-29 10:26:30.071988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.169 ms 00:20:50.673 [2024-11-29 10:26:30.072004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.072102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.072114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.673 [2024-11-29 10:26:30.072124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 
00:20:50.673 [2024-11-29 10:26:30.072133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.092689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.092738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.673 [2024-11-29 10:26:30.092750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.497 ms 00:20:50.673 [2024-11-29 10:26:30.092758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.092836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.092847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.673 [2024-11-29 10:26:30.092856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:50.673 [2024-11-29 10:26:30.092868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.093363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.093399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.673 [2024-11-29 10:26:30.093411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:20:50.673 [2024-11-29 10:26:30.093420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.093571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.093581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.673 [2024-11-29 10:26:30.093591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:50.673 [2024-11-29 10:26:30.093600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.101110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.101151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.673 [2024-11-29 10:26:30.101162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.486 ms 00:20:50.673 [2024-11-29 10:26:30.101180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.104572] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:50.673 [2024-11-29 10:26:30.104619] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:50.673 [2024-11-29 10:26:30.104633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.673 [2024-11-29 10:26:30.104642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:50.673 [2024-11-29 10:26:30.104652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.356 ms 00:20:50.673 [2024-11-29 10:26:30.104660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.673 [2024-11-29 10:26:30.120418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.674 [2024-11-29 10:26:30.120466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:50.674 [2024-11-29 10:26:30.120478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:20:50.674 [2024-11-29 10:26:30.120486] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:50.674 [2024-11-29 10:26:30.123135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.674 [2024-11-29 10:26:30.123178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:50.674 [2024-11-29 10:26:30.123188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:20:50.674 [2024-11-29 10:26:30.123196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.674 [2024-11-29 10:26:30.125587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.674 [2024-11-29 10:26:30.125630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:50.674 [2024-11-29 10:26:30.125640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:20:50.674 [2024-11-29 10:26:30.125647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.674 [2024-11-29 10:26:30.126036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.674 [2024-11-29 10:26:30.126059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.674 [2024-11-29 10:26:30.126069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:50.674 [2024-11-29 10:26:30.126077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.936 [2024-11-29 10:26:30.151540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.936 [2024-11-29 10:26:30.151605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.936 [2024-11-29 10:26:30.151625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.442 ms 00:20:50.936 [2024-11-29 10:26:30.151634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.936 [2024-11-29 10:26:30.159928] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.936 [2024-11-29 10:26:30.162979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.936 [2024-11-29 10:26:30.163022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.936 [2024-11-29 10:26:30.163034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.289 ms 00:20:50.936 [2024-11-29 10:26:30.163048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.936 [2024-11-29 10:26:30.163131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.936 [2024-11-29 10:26:30.163141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.936 [2024-11-29 10:26:30.163151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:50.936 [2024-11-29 10:26:30.163168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.936 [2024-11-29 10:26:30.163240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.936 [2024-11-29 10:26:30.163250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.936 [2024-11-29 10:26:30.163259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:50.936 [2024-11-29 10:26:30.163274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.937 [2024-11-29 10:26:30.163296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.937 [2024-11-29 10:26:30.163308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:50.937 [2024-11-29 10:26:30.163317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.937 [2024-11-29 10:26:30.163325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.937 [2024-11-29 10:26:30.163364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.937 [2024-11-29 10:26:30.163374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.937 [2024-11-29 10:26:30.163382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.937 [2024-11-29 10:26:30.163390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:50.937 [2024-11-29 10:26:30.163404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.937 [2024-11-29 10:26:30.168419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.937 [2024-11-29 10:26:30.168464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.937 [2024-11-29 10:26:30.168476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.996 ms 00:20:50.937 [2024-11-29 10:26:30.168484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.937 [2024-11-29 10:26:30.168573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.937 [2024-11-29 10:26:30.168584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.937 [2024-11-29 10:26:30.168596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:50.937 [2024-11-29 10:26:30.168608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.937 [2024-11-29 10:26:30.169839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.214 ms, result 0 00:20:51.881  [2024-11-29T10:26:32.293Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-29T10:26:33.238Z] Copying: 32/1024 [MB] (14 MBps) [2024-11-29T10:26:34.183Z] Copying: 49/1024 [MB] (17 MBps) [2024-11-29T10:26:35.572Z] Copying: 64/1024 [MB] (15 MBps) [2024-11-29T10:26:36.518Z] Copying: 74/1024 [MB] (10 MBps) [2024-11-29T10:26:37.465Z] Copying: 88/1024 [MB] (13 MBps) [2024-11-29T10:26:38.408Z] Copying: 105/1024 [MB] (17 MBps) [2024-11-29T10:26:39.355Z] Copying: 121/1024 [MB] (16 MBps) [2024-11-29T10:26:40.299Z] Copying: 141/1024 [MB] (19 MBps) [2024-11-29T10:26:41.243Z] Copying: 160/1024 [MB] (19 MBps) [2024-11-29T10:26:42.188Z] Copying: 177/1024 [MB] (17 MBps) [2024-11-29T10:26:43.578Z] Copying: 192/1024 [MB] (14 MBps) [2024-11-29T10:26:44.522Z] Copying: 208/1024 [MB] (16 MBps) [2024-11-29T10:26:45.465Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-29T10:26:46.410Z] Copying: 236/1024 [MB] (17 MBps) [2024-11-29T10:26:47.356Z] Copying: 252/1024 [MB] (15 MBps) [2024-11-29T10:26:48.300Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-29T10:26:49.245Z] Copying: 276/1024 [MB] (14 MBps) [2024-11-29T10:26:50.190Z] Copying: 289/1024 [MB] (13 MBps) [2024-11-29T10:26:51.574Z] Copying: 302/1024 [MB] (12 MBps) [2024-11-29T10:26:52.515Z] Copying: 312/1024 [MB] (10 MBps) [2024-11-29T10:26:53.460Z] Copying: 322/1024 [MB] (10 MBps) [2024-11-29T10:26:54.435Z] Copying: 339936/1048576 [kB] (9984 kBps) [2024-11-29T10:26:55.403Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-29T10:26:56.348Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-29T10:26:57.294Z] Copying: 362/1024 [MB] (10 MBps) [2024-11-29T10:26:58.241Z] Copying: 372/1024 [MB] (10 MBps) 
[2024-11-29T10:26:59.188Z] Copying: 383/1024 [MB] (10 MBps) [2024-11-29T10:27:00.575Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-29T10:27:01.518Z] Copying: 404/1024 [MB] (11 MBps) [2024-11-29T10:27:02.460Z] Copying: 416/1024 [MB] (11 MBps) [2024-11-29T10:27:03.405Z] Copying: 439/1024 [MB] (23 MBps) [2024-11-29T10:27:04.352Z] Copying: 452/1024 [MB] (12 MBps) [2024-11-29T10:27:05.297Z] Copying: 462/1024 [MB] (10 MBps) [2024-11-29T10:27:06.235Z] Copying: 472/1024 [MB] (10 MBps) [2024-11-29T10:27:07.622Z] Copying: 484/1024 [MB] (11 MBps) [2024-11-29T10:27:08.194Z] Copying: 495/1024 [MB] (10 MBps) [2024-11-29T10:27:09.577Z] Copying: 505/1024 [MB] (10 MBps) [2024-11-29T10:27:10.514Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-29T10:27:11.457Z] Copying: 555/1024 [MB] (39 MBps) [2024-11-29T10:27:12.400Z] Copying: 573/1024 [MB] (18 MBps) [2024-11-29T10:27:13.337Z] Copying: 586/1024 [MB] (12 MBps) [2024-11-29T10:27:14.273Z] Copying: 621/1024 [MB] (34 MBps) [2024-11-29T10:27:15.216Z] Copying: 650/1024 [MB] (29 MBps) [2024-11-29T10:27:16.602Z] Copying: 668/1024 [MB] (17 MBps) [2024-11-29T10:27:17.548Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-29T10:27:18.492Z] Copying: 692/1024 [MB] (13 MBps) [2024-11-29T10:27:19.433Z] Copying: 711/1024 [MB] (19 MBps) [2024-11-29T10:27:20.372Z] Copying: 726/1024 [MB] (15 MBps) [2024-11-29T10:27:21.320Z] Copying: 758/1024 [MB] (31 MBps) [2024-11-29T10:27:22.264Z] Copying: 778/1024 [MB] (20 MBps) [2024-11-29T10:27:23.208Z] Copying: 797/1024 [MB] (18 MBps) [2024-11-29T10:27:24.599Z] Copying: 813/1024 [MB] (15 MBps) [2024-11-29T10:27:25.565Z] Copying: 828/1024 [MB] (14 MBps) [2024-11-29T10:27:26.552Z] Copying: 841/1024 [MB] (13 MBps) [2024-11-29T10:27:27.496Z] Copying: 863/1024 [MB] (21 MBps) [2024-11-29T10:27:28.449Z] Copying: 877/1024 [MB] (14 MBps) [2024-11-29T10:27:29.395Z] Copying: 896/1024 [MB] (19 MBps) [2024-11-29T10:27:30.341Z] Copying: 914/1024 [MB] (17 MBps) [2024-11-29T10:27:31.284Z] Copying: 929/1024 [MB] (15 MBps) [2024-11-29T10:27:32.226Z] Copying: 944/1024 [MB] (15 MBps) [2024-11-29T10:27:33.615Z] Copying: 958/1024 [MB] (14 MBps) [2024-11-29T10:27:34.187Z] Copying: 971/1024 [MB] (12 MBps) [2024-11-29T10:27:35.574Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-29T10:27:36.514Z] Copying: 992/1024 [MB] (10 MBps) [2024-11-29T10:27:37.456Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-29T10:27:37.456Z] Copying: 1021/1024 [MB] (17 MBps) [2024-11-29T10:27:37.456Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 10:27:37.312750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.991 [2024-11-29 10:27:37.312831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:57.991 [2024-11-29 10:27:37.312848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:57.991 [2024-11-29 10:27:37.312865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.991 [2024-11-29 10:27:37.312889] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:57.991 [2024-11-29 10:27:37.313654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.991 [2024-11-29 10:27:37.313696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:57.991 [2024-11-29 10:27:37.313708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:21:57.991 [2024-11-29 10:27:37.313717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.991 [2024-11-29 10:27:37.316718] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.991 [2024-11-29 10:27:37.316769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:57.991 [2024-11-29 10:27:37.316780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:21:57.991 [2024-11-29 10:27:37.316789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.991 [2024-11-29 10:27:37.336119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.991 [2024-11-29 10:27:37.336171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:57.992 [2024-11-29 10:27:37.336182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.291 ms 00:21:57.992 [2024-11-29 10:27:37.336191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.342347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.342386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:57.992 [2024-11-29 10:27:37.342399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:21:57.992 [2024-11-29 10:27:37.342406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.345672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.345723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:57.992 [2024-11-29 10:27:37.345734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:21:57.992 [2024-11-29 10:27:37.345742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.350917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.350968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:57.992 [2024-11-29 10:27:37.350991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:21:57.992 [2024-11-29 10:27:37.350999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.351125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.351136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:57.992 [2024-11-29 10:27:37.351151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:57.992 [2024-11-29 10:27:37.351159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.354344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.354395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:57.992 [2024-11-29 10:27:37.354406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:21:57.992 [2024-11-29 10:27:37.354413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.357533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.357584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:57.992 [2024-11-29 10:27:37.357594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:21:57.992 [2024-11-29 10:27:37.357602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.359912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.359969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:57.992 [2024-11-29 10:27:37.359979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:21:57.992 [2024-11-29 10:27:37.359986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.362383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.992 [2024-11-29 10:27:37.362430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:57.992 [2024-11-29 10:27:37.362441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:21:57.992 [2024-11-29 10:27:37.362448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.992 [2024-11-29 10:27:37.362488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:57.992 [2024-11-29 10:27:37.362504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362847] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:57.992 [2024-11-29 10:27:37.362944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.362999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 
10:27:37.363051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:21:57.993 [2024-11-29 10:27:37.363232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:57.993 [2024-11-29 10:27:37.363303] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:57.993 [2024-11-29 10:27:37.363311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b21615cc-511d-4102-af29-37fdb7e3e0e1 00:21:57.993 [2024-11-29 10:27:37.363327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:57.993 [2024-11-29 10:27:37.363335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:57.993 [2024-11-29 10:27:37.363343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:57.993 [2024-11-29 10:27:37.363351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:57.993 [2024-11-29 10:27:37.363359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:57.993 [2024-11-29 10:27:37.363367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:57.993 [2024-11-29 10:27:37.363379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:57.993 [2024-11-29 10:27:37.363386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:57.993 [2024-11-29 10:27:37.363392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:57.993 [2024-11-29 10:27:37.363399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.993 [2024-11-29 10:27:37.363411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:57.993 [2024-11-29 10:27:37.363423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:21:57.993 [2024-11-29 10:27:37.363430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.365717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.993 [2024-11-29 10:27:37.365759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:57.993 [2024-11-29 10:27:37.365769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.270 ms 00:21:57.993 [2024-11-29 10:27:37.365777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.365914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.993 [2024-11-29 10:27:37.365925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:57.993 
[2024-11-29 10:27:37.365934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:57.993 [2024-11-29 10:27:37.365941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.373580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.373634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.993 [2024-11-29 10:27:37.373645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.373653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.373718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.373727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.993 [2024-11-29 10:27:37.373741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.373749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.373832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.373844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.993 [2024-11-29 10:27:37.373852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.373859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.373879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.373890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.993 [2024-11-29 10:27:37.373897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.373905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.387839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.387899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.993 [2024-11-29 10:27:37.387910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.387919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.398278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.398340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.993 [2024-11-29 10:27:37.398351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.398359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.398409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.398419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.993 [2024-11-29 10:27:37.398428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.398436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.993 [2024-11-29 10:27:37.398477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.993 [2024-11-29 10:27:37.398487] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.993 [2024-11-29 10:27:37.398499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.993 [2024-11-29 10:27:37.398507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-11-29 10:27:37.398585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.994 [2024-11-29 10:27:37.398595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.994 [2024-11-29 10:27:37.398608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.994 [2024-11-29 10:27:37.398616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-11-29 10:27:37.398646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.994 [2024-11-29 10:27:37.398655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:57.994 [2024-11-29 10:27:37.398663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.994 [2024-11-29 10:27:37.398674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-11-29 10:27:37.398714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.994 [2024-11-29 10:27:37.398727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.994 [2024-11-29 10:27:37.398735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.994 [2024-11-29 10:27:37.398743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-11-29 10:27:37.398790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.994 [2024-11-29 10:27:37.398818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.994 [2024-11-29 10:27:37.398830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.994 [2024-11-29 10:27:37.398842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-11-29 10:27:37.398977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.193 ms, result 0 00:21:58.256 00:21:58.256 00:21:58.256 10:27:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:58.517 [2024-11-29 10:27:37.738630] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:21:58.517 [2024-11-29 10:27:37.738773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89085 ] 00:21:58.517 [2024-11-29 10:27:37.887972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.517 [2024-11-29 10:27:37.917142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.780 [2024-11-29 10:27:38.036222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.780 [2024-11-29 10:27:38.036314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.780 [2024-11-29 10:27:38.196735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.196823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:58.780 [2024-11-29 10:27:38.196839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.780 [2024-11-29 10:27:38.196852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.196910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.196925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.780 [2024-11-29 10:27:38.196935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:58.780 [2024-11-29 10:27:38.196954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.196983] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:58.780 [2024-11-29 10:27:38.197609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:58.780 [2024-11-29 10:27:38.197666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.197678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.780 [2024-11-29 10:27:38.197692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:21:58.780 [2024-11-29 10:27:38.197701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.199507] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:58.780 [2024-11-29 10:27:38.203665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.203724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:58.780 [2024-11-29 10:27:38.203740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:21:58.780 [2024-11-29 10:27:38.203754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.203845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.203859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:58.780 [2024-11-29 10:27:38.203869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:58.780 [2024-11-29 10:27:38.203878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.212558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:58.780 [2024-11-29 10:27:38.212600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.780 [2024-11-29 10:27:38.212618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.634 ms 00:21:58.780 [2024-11-29 10:27:38.212631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.212741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.212752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.780 [2024-11-29 10:27:38.212761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:58.780 [2024-11-29 10:27:38.212771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.212848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.212864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:58.780 [2024-11-29 10:27:38.212873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:58.780 [2024-11-29 10:27:38.212885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.212908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.780 [2024-11-29 10:27:38.215148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.215192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.780 [2024-11-29 10:27:38.215203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:21:58.780 [2024-11-29 10:27:38.215210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.215246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.215254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:58.780 [2024-11-29 10:27:38.215266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:58.780 [2024-11-29 10:27:38.215276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.215301] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:58.780 [2024-11-29 10:27:38.215321] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:58.780 [2024-11-29 10:27:38.215363] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:58.780 [2024-11-29 10:27:38.215380] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:58.780 [2024-11-29 10:27:38.215485] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:58.780 [2024-11-29 10:27:38.215496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:58.780 [2024-11-29 10:27:38.215510] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:58.780 [2024-11-29 10:27:38.215521] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:58.780 [2024-11-29 10:27:38.215537] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:58.780 [2024-11-29 10:27:38.215546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:58.780 [2024-11-29 10:27:38.215554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:58.780 [2024-11-29 10:27:38.215563] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:58.780 [2024-11-29 10:27:38.215570] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:58.780 [2024-11-29 10:27:38.215582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.215590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:58.780 [2024-11-29 10:27:38.215599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:58.780 [2024-11-29 10:27:38.215609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.215695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.780 [2024-11-29 10:27:38.215710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:58.780 [2024-11-29 10:27:38.215722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:58.780 [2024-11-29 10:27:38.215729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.780 [2024-11-29 10:27:38.215849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:58.780 [2024-11-29 10:27:38.215862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:58.780 [2024-11-29 10:27:38.215872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.780 [2024-11-29 10:27:38.215881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.780 [2024-11-29 10:27:38.215890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:58.780 [2024-11-29 10:27:38.215898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:58.780 [2024-11-29 10:27:38.215907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:58.780 [2024-11-29 10:27:38.215915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:58.780 [2024-11-29 10:27:38.215923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:58.780 [2024-11-29 10:27:38.215931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.780 [2024-11-29 10:27:38.215942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:58.780 [2024-11-29 10:27:38.215953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:58.780 [2024-11-29 10:27:38.215961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.780 [2024-11-29 10:27:38.215969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:58.780 [2024-11-29 10:27:38.215978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:58.780 [2024-11-29 10:27:38.215986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.780 [2024-11-29 10:27:38.215995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:58.780 [2024-11-29 10:27:38.216003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:58.780 [2024-11-29 10:27:38.216010] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:58.781 [2024-11-29 10:27:38.216027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.781 [2024-11-29 10:27:38.216043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:58.781 [2024-11-29 10:27:38.216050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.781 [2024-11-29 10:27:38.216066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:58.781 [2024-11-29 10:27:38.216081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.781 [2024-11-29 10:27:38.216096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:58.781 [2024-11-29 10:27:38.216104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.781 [2024-11-29 10:27:38.216120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:58.781 [2024-11-29 10:27:38.216127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.781 [2024-11-29 10:27:38.216143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:58.781 [2024-11-29 10:27:38.216152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:58.781 [2024-11-29 10:27:38.216160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.781 [2024-11-29 10:27:38.216168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:58.781 [2024-11-29 10:27:38.216176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:58.781 [2024-11-29 10:27:38.216183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:58.781 [2024-11-29 10:27:38.216199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:58.781 [2024-11-29 10:27:38.216208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216219] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:58.781 [2024-11-29 10:27:38.216230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:58.781 [2024-11-29 10:27:38.216239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.781 [2024-11-29 10:27:38.216247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.781 [2024-11-29 10:27:38.216260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:58.781 [2024-11-29 10:27:38.216268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:58.781 [2024-11-29 10:27:38.216276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:58.781 
[2024-11-29 10:27:38.216284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:58.781 [2024-11-29 10:27:38.216292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:58.781 [2024-11-29 10:27:38.216300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:58.781 [2024-11-29 10:27:38.216310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:58.781 [2024-11-29 10:27:38.216321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:58.781 [2024-11-29 10:27:38.216339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:58.781 [2024-11-29 10:27:38.216348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:58.781 [2024-11-29 10:27:38.216359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:58.781 [2024-11-29 10:27:38.216380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:58.781 [2024-11-29 10:27:38.216388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:58.781 [2024-11-29 10:27:38.216397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:58.781 [2024-11-29 10:27:38.216405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:58.781 [2024-11-29 10:27:38.216414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:58.781 [2024-11-29 10:27:38.216429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:58.781 [2024-11-29 10:27:38.216466] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:58.781 [2024-11-29 10:27:38.216475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:58.781 [2024-11-29 10:27:38.216493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:58.781 [2024-11-29 10:27:38.216501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:58.781 [2024-11-29 10:27:38.216510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:58.781 [2024-11-29 10:27:38.216520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.781 [2024-11-29 10:27:38.216528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:58.781 [2024-11-29 10:27:38.216536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:21:58.781 [2024-11-29 10:27:38.216546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.781 [2024-11-29 10:27:38.230975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.781 [2024-11-29 10:27:38.231019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.781 [2024-11-29 10:27:38.231031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.379 ms 00:21:58.781 [2024-11-29 10:27:38.231039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.781 [2024-11-29 10:27:38.231128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.781 [2024-11-29 10:27:38.231137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.781 [2024-11-29 10:27:38.231145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:58.781 [2024-11-29 10:27:38.231154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.250315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.250375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.043 [2024-11-29 10:27:38.250390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.101 ms 00:21:59.043 [2024-11-29 10:27:38.250399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.250452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.250463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.043 [2024-11-29 10:27:38.250472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:59.043 [2024-11-29 10:27:38.250487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.251100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.251142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.043 [2024-11-29 10:27:38.251155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:21:59.043 [2024-11-29 10:27:38.251165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.251331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.251342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.043 [2024-11-29 10:27:38.251352] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:21:59.043 [2024-11-29 10:27:38.251362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.259362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.259409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.043 [2024-11-29 10:27:38.259419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.971 ms 00:21:59.043 [2024-11-29 10:27:38.259435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.263447] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:59.043 [2024-11-29 10:27:38.263500] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:59.043 [2024-11-29 10:27:38.263517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.263526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:59.043 [2024-11-29 10:27:38.263535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:21:59.043 [2024-11-29 10:27:38.263542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.279352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.279399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:59.043 [2024-11-29 10:27:38.279411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.750 ms 00:21:59.043 [2024-11-29 10:27:38.279419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.282278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.282323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:59.043 [2024-11-29 10:27:38.282332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:21:59.043 [2024-11-29 10:27:38.282340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.043 [2024-11-29 10:27:38.284988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.043 [2024-11-29 10:27:38.285037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:59.044 [2024-11-29 10:27:38.285047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:21:59.044 [2024-11-29 10:27:38.285055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.285399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.285420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:59.044 [2024-11-29 10:27:38.285431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:21:59.044 [2024-11-29 10:27:38.285439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.310343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.310402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:59.044 [2024-11-29 10:27:38.310415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.870 ms 00:21:59.044 [2024-11-29 10:27:38.310423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.318507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:59.044 [2024-11-29 10:27:38.321473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.321513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:59.044 [2024-11-29 10:27:38.321525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.998 ms 00:21:59.044 [2024-11-29 10:27:38.321539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.321615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.321627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:59.044 [2024-11-29 10:27:38.321643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:59.044 [2024-11-29 10:27:38.321653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.321723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.321738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:59.044 [2024-11-29 10:27:38.321747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:59.044 [2024-11-29 10:27:38.321755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.321776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.321789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:59.044 [2024-11-29 10:27:38.321815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:59.044 [2024-11-29 10:27:38.321824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.321863] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:59.044 [2024-11-29 10:27:38.321873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.321882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:59.044 [2024-11-29 10:27:38.321892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:59.044 [2024-11-29 10:27:38.321900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.327537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.327588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:59.044 [2024-11-29 10:27:38.327599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.617 ms 00:21:59.044 [2024-11-29 10:27:38.327617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 [2024-11-29 10:27:38.327704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.044 [2024-11-29 10:27:38.327715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:59.044 [2024-11-29 10:27:38.327728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:59.044 [2024-11-29 10:27:38.327739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.044 
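Each management step above follows the same four-record pattern from mngt/ftl_mngt.c visible in this log: an Action marker (line 427), the step name (428), its duration in milliseconds (430), and a status code (431), with finish_msg (459) closing the whole process with a total. As a minimal sketch of tabulating such a trace offline, assuming Python 3 and a hypothetical capture file ftl.log holding this console output:

import re

# Minimal sketch: list FTL management steps and durations from a captured
# SPDK console log (the file name "ftl.log" is hypothetical, not from the run).
# Step names come from ftl_mngt.c:428 "name: ..." records; durations come
# from ftl_mngt.c:430 "duration: <n> ms" records. The records run together
# in one stream, so match against the whole text rather than per line.
text = open("ftl.log", encoding="utf-8", errors="replace").read()
names = re.findall(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*?) \d{2}:\d{2}:\d{2}\.", text)
durations = re.findall(
    r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms", text)
for name, ms in zip(names, durations):
    print(f"{float(ms):9.3f} ms  {name}")
print(f"{sum(map(float, durations)):9.3f} ms  total")

For the startup trace above, the summed step durations should land near the 132.000 ms total that the finish_msg record reports next.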
[2024-11-29 10:27:38.329239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.000 ms, result 0 00:22:00.435
[2024-11-29T10:27:40.841Z] Copying: 18/1024 [MB] (18 MBps) [... intermediate spdk_dd progress snapshots, 10-25 MBps, elided ...] [2024-11-29T10:29:00.504Z] Copying: 1024/1024 [MB] (average 12 MBps)
[2024-11-29 10:29:00.362668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.362759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.039 [2024-11-29 10:29:00.362785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.039 [2024-11-29 10:29:00.362816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.362851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.039 [2024-11-29 10:29:00.363692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.363733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.039 [2024-11-29 10:29:00.363761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:23:21.039 [2024-11-29 10:29:00.363774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.364144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.364163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.039 [2024-11-29 10:29:00.364182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:21.039 [2024-11-29 10:29:00.364197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.370046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.370116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.039 [2024-11-29 10:29:00.370131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:23:21.039 [2024-11-29 10:29:00.370142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.376771] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.376824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:21.039 [2024-11-29 10:29:00.376837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.589 ms 00:23:21.039 [2024-11-29 10:29:00.376845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.379858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.379907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.039 [2024-11-29 10:29:00.379917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.923 ms 00:23:21.039 [2024-11-29 10:29:00.379926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.385193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.385241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.039 [2024-11-29 10:29:00.385252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.221 ms 00:23:21.039 [2024-11-29 10:29:00.385261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.385386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.385396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.039 [2024-11-29 10:29:00.385415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:21.039 [2024-11-29 10:29:00.385427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.389190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.389252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:21.039 [2024-11-29 10:29:00.389265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:23:21.039 [2024-11-29 10:29:00.389273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.392027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.392071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:21.039 [2024-11-29 10:29:00.392083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:23:21.039 [2024-11-29 10:29:00.392091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.395252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.395303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:21.039 [2024-11-29 10:29:00.395313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:23:21.039 [2024-11-29 10:29:00.395321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.397725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.039 [2024-11-29 10:29:00.397766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:21.039 [2024-11-29 10:29:00.397776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:23:21.039 [2024-11-29 10:29:00.397784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:21.039 [2024-11-29 10:29:00.397839] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:21.039 [2024-11-29 10:29:00.397866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.397993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:23:21.039 [2024-11-29 10:29:00.398063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:21.039 [2024-11-29 10:29:00.398115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398672] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:21.040 [2024-11-29 10:29:00.398707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:21.040 [2024-11-29 10:29:00.398716] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b21615cc-511d-4102-af29-37fdb7e3e0e1 00:23:21.040 [2024-11-29 10:29:00.398725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:21.040 [2024-11-29 10:29:00.398733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:21.040 [2024-11-29 10:29:00.398741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:21.040 [2024-11-29 10:29:00.398749] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:21.040 [2024-11-29 10:29:00.398757] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:21.040 [2024-11-29 10:29:00.398766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:21.040 [2024-11-29 10:29:00.398776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:21.040 [2024-11-29 10:29:00.398783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:21.040 [2024-11-29 10:29:00.398789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:21.040 [2024-11-29 10:29:00.398797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.040 [2024-11-29 10:29:00.398825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:21.040 [2024-11-29 10:29:00.398834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:23:21.040 [2024-11-29 10:29:00.398842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.040 [2024-11-29 10:29:00.401350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.040 [2024-11-29 10:29:00.401389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:21.040 [2024-11-29 10:29:00.401400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:23:21.040 [2024-11-29 10:29:00.401409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.040 [2024-11-29 10:29:00.401542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.040 [2024-11-29 10:29:00.401551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:21.040 [2024-11-29 10:29:00.401564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:21.040 [2024-11-29 10:29:00.401572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.040 [2024-11-29 10:29:00.409532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.040 [2024-11-29 10:29:00.409577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.041 [2024-11-29 10:29:00.409588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.409596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.409661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.409669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.041 
[2024-11-29 10:29:00.409678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.409685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.409753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.409763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.041 [2024-11-29 10:29:00.409772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.409786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.409826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.409836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.041 [2024-11-29 10:29:00.409845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.409853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.425080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.425127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.041 [2024-11-29 10:29:00.425139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.425148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.436525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.436594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.041 [2024-11-29 10:29:00.436606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.436614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.436666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.436676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.041 [2024-11-29 10:29:00.436686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.436694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.436733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.436747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.041 [2024-11-29 10:29:00.436756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.436764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.436857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.436868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.041 [2024-11-29 10:29:00.436877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.436885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.436917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.436926] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:21.041 [2024-11-29 10:29:00.436938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.436946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.436993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.437003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.041 [2024-11-29 10:29:00.437012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.437020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.437070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.041 [2024-11-29 10:29:00.437083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.041 [2024-11-29 10:29:00.437091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.041 [2024-11-29 10:29:00.437108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.041 [2024-11-29 10:29:00.437245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.550 ms, result 0 00:23:21.330 00:23:21.330 00:23:21.330 10:29:00 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:23.875 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:23.875 10:29:02 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:23.875 [2024-11-29 10:29:02.978301] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:23:23.875 [2024-11-29 10:29:02.978446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89964 ] 00:23:23.875 [2024-11-29 10:29:03.127591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.875 [2024-11-29 10:29:03.156781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.875 [2024-11-29 10:29:03.275535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:23.875 [2024-11-29 10:29:03.275629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.136 [2024-11-29 10:29:03.433547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.136 [2024-11-29 10:29:03.433594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:24.136 [2024-11-29 10:29:03.433607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:24.136 [2024-11-29 10:29:03.433615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.136 [2024-11-29 10:29:03.433662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.136 [2024-11-29 10:29:03.433672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.136 [2024-11-29 10:29:03.433680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:24.136 [2024-11-29 10:29:03.433693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.136 [2024-11-29 10:29:03.433714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:24.136 [2024-11-29 10:29:03.434267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:24.136 [2024-11-29 10:29:03.434321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.136 [2024-11-29 10:29:03.434342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.136 [2024-11-29 10:29:03.434361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:23:24.136 [2024-11-29 10:29:03.434373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.136 [2024-11-29 10:29:03.435593] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:24.136 [2024-11-29 10:29:03.438384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.136 [2024-11-29 10:29:03.438420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:24.136 [2024-11-29 10:29:03.438431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:23:24.136 [2024-11-29 10:29:03.438445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.136 [2024-11-29 10:29:03.438496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.136 [2024-11-29 10:29:03.438508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:24.136 [2024-11-29 10:29:03.438516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:24.136 [2024-11-29 10:29:03.438523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.444068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:24.137 [2024-11-29 10:29:03.444103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.137 [2024-11-29 10:29:03.444117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.499 ms 00:23:24.137 [2024-11-29 10:29:03.444129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.444208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.137 [2024-11-29 10:29:03.444217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.137 [2024-11-29 10:29:03.444225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:24.137 [2024-11-29 10:29:03.444237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.444277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.137 [2024-11-29 10:29:03.444287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:24.137 [2024-11-29 10:29:03.444295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:24.137 [2024-11-29 10:29:03.444307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.444328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:24.137 [2024-11-29 10:29:03.445908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.137 [2024-11-29 10:29:03.445936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.137 [2024-11-29 10:29:03.445945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:23:24.137 [2024-11-29 10:29:03.445952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.445982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.137 [2024-11-29 10:29:03.445990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:24.137 [2024-11-29 10:29:03.445998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:24.137 [2024-11-29 10:29:03.446008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.446027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:24.137 [2024-11-29 10:29:03.446046] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:24.137 [2024-11-29 10:29:03.446105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:24.137 [2024-11-29 10:29:03.446121] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:24.137 [2024-11-29 10:29:03.446245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:24.137 [2024-11-29 10:29:03.446260] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:24.137 [2024-11-29 10:29:03.446275] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:24.137 [2024-11-29 10:29:03.446289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446301] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446314] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:24.137 [2024-11-29 10:29:03.446326] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:24.137 [2024-11-29 10:29:03.446334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:24.137 [2024-11-29 10:29:03.446342] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:24.137 [2024-11-29 10:29:03.446355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.137 [2024-11-29 10:29:03.446365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:24.137 [2024-11-29 10:29:03.446377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:23:24.137 [2024-11-29 10:29:03.446388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.446494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.137 [2024-11-29 10:29:03.446515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:24.137 [2024-11-29 10:29:03.446528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:24.137 [2024-11-29 10:29:03.446540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.137 [2024-11-29 10:29:03.446672] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:24.137 [2024-11-29 10:29:03.446692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:24.137 [2024-11-29 10:29:03.446706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:24.137 [2024-11-29 10:29:03.446743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:24.137 [2024-11-29 10:29:03.446768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.137 [2024-11-29 10:29:03.446783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:24.137 [2024-11-29 10:29:03.446792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:24.137 [2024-11-29 10:29:03.446813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.137 [2024-11-29 10:29:03.446822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:24.137 [2024-11-29 10:29:03.446829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:24.137 [2024-11-29 10:29:03.446837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:24.137 [2024-11-29 10:29:03.446852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446861] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:24.137 [2024-11-29 10:29:03.446883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:24.137 [2024-11-29 10:29:03.446917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:24.137 [2024-11-29 10:29:03.446954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:24.137 [2024-11-29 10:29:03.446971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.137 [2024-11-29 10:29:03.446982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:24.137 [2024-11-29 10:29:03.446991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:24.137 [2024-11-29 10:29:03.447001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.137 [2024-11-29 10:29:03.447012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:24.137 [2024-11-29 10:29:03.447023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:24.137 [2024-11-29 10:29:03.447029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.137 [2024-11-29 10:29:03.447036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:24.137 [2024-11-29 10:29:03.447043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:24.138 [2024-11-29 10:29:03.447053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.138 [2024-11-29 10:29:03.447064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:24.138 [2024-11-29 10:29:03.447075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:24.138 [2024-11-29 10:29:03.447086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.138 [2024-11-29 10:29:03.447096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:24.138 [2024-11-29 10:29:03.447107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:24.138 [2024-11-29 10:29:03.447118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.138 [2024-11-29 10:29:03.447128] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:24.138 [2024-11-29 10:29:03.447140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:24.138 [2024-11-29 10:29:03.447147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.138 [2024-11-29 10:29:03.447156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.138 [2024-11-29 10:29:03.447171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:24.138 [2024-11-29 10:29:03.447181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:24.138 [2024-11-29 10:29:03.447190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:24.138 
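[A note on reading the layout dumps in this stretch: within each dump, every region is a contiguous [offset, offset + blocks) slice of the device, so sorting the entries by offset should tile the space without gaps. Two spot checks against the NV cache layout above, assuming only that the printed MiB figures are rounded to two decimals:

    \mathrm{offset}_{\mathrm{next}} = \mathrm{offset} + \mathrm{blocks}
    80.62 + 0.50 = 81.12\ \mathrm{MiB}   \quad (\mathrm{band\_md\_mirror} \to \mathrm{p2l0})
    105.12 + 8.00 = 113.12\ \mathrm{MiB} \quad (\mathrm{p2l3} \to \mathrm{trim\_md})

Sums involving the 0.12 MiB regions can look off by 0.01 MiB, which is presumably just rounding of 0.125 MiB (32 blocks of 4 KiB).]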
[2024-11-29 10:29:03.447197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:24.138 [2024-11-29 10:29:03.447208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:24.138 [2024-11-29 10:29:03.447219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:24.138 [2024-11-29 10:29:03.447232] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:24.138 [2024-11-29 10:29:03.447257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:24.138 [2024-11-29 10:29:03.447282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:24.138 [2024-11-29 10:29:03.447293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:24.138 [2024-11-29 10:29:03.447306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:24.138 [2024-11-29 10:29:03.447321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:24.138 [2024-11-29 10:29:03.447333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:24.138 [2024-11-29 10:29:03.447344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:24.138 [2024-11-29 10:29:03.447351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:24.138 [2024-11-29 10:29:03.447360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:24.138 [2024-11-29 10:29:03.447377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:24.138 [2024-11-29 10:29:03.447422] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:24.138 [2024-11-29 10:29:03.447435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:24.138 [2024-11-29 10:29:03.447455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:24.138 [2024-11-29 10:29:03.447469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:24.138 [2024-11-29 10:29:03.447480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:24.138 [2024-11-29 10:29:03.447495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.447510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:24.138 [2024-11-29 10:29:03.447527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:23:24.138 [2024-11-29 10:29:03.447539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.457781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.457832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.138 [2024-11-29 10:29:03.457843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.184 ms 00:23:24.138 [2024-11-29 10:29:03.457851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.457934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.457942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.138 [2024-11-29 10:29:03.457951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:24.138 [2024-11-29 10:29:03.457959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.477151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.477210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.138 [2024-11-29 10:29:03.477226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.137 ms 00:23:24.138 [2024-11-29 10:29:03.477237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.477288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.477301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.138 [2024-11-29 10:29:03.477313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:24.138 [2024-11-29 10:29:03.477324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.477821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.477860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.138 [2024-11-29 10:29:03.477875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:23:24.138 [2024-11-29 10:29:03.477887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.478084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.478099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.138 [2024-11-29 10:29:03.478112] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:23:24.138 [2024-11-29 10:29:03.478129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.484677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.138 [2024-11-29 10:29:03.484717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.138 [2024-11-29 10:29:03.484733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.521 ms 00:23:24.138 [2024-11-29 10:29:03.484740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.138 [2024-11-29 10:29:03.487921] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:24.139 [2024-11-29 10:29:03.487964] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:24.139 [2024-11-29 10:29:03.487978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.487986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:24.139 [2024-11-29 10:29:03.487994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:23:24.139 [2024-11-29 10:29:03.488001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.503273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.503318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:24.139 [2024-11-29 10:29:03.503329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.222 ms 00:23:24.139 [2024-11-29 10:29:03.503337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.505675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.505715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:24.139 [2024-11-29 10:29:03.505725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:23:24.139 [2024-11-29 10:29:03.505732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.507725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.507768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:24.139 [2024-11-29 10:29:03.507777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:23:24.139 [2024-11-29 10:29:03.507784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.508137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.508150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:24.139 [2024-11-29 10:29:03.508160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:23:24.139 [2024-11-29 10:29:03.508173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.529188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.529246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:24.139 [2024-11-29 10:29:03.529258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.992 ms 00:23:24.139 [2024-11-29 10:29:03.529267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.537728] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:24.139 [2024-11-29 10:29:03.540579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.540626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:24.139 [2024-11-29 10:29:03.540643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.263 ms 00:23:24.139 [2024-11-29 10:29:03.540654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.540746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.540758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:24.139 [2024-11-29 10:29:03.540774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:24.139 [2024-11-29 10:29:03.540783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.540872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.540889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:24.139 [2024-11-29 10:29:03.540898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:24.139 [2024-11-29 10:29:03.540906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.540929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.540941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:24.139 [2024-11-29 10:29:03.540950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:24.139 [2024-11-29 10:29:03.540957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.540996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:24.139 [2024-11-29 10:29:03.541005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.541020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:24.139 [2024-11-29 10:29:03.541030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:24.139 [2024-11-29 10:29:03.541038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.546152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.546198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:24.139 [2024-11-29 10:29:03.546209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.093 ms 00:23:24.139 [2024-11-29 10:29:03.546218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 [2024-11-29 10:29:03.546306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.139 [2024-11-29 10:29:03.546317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:24.139 [2024-11-29 10:29:03.546326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:24.139 [2024-11-29 10:29:03.546340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.139 
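[Every management step in this startup sequence is logged as the same Action / name / duration / status quadruple, and the per-step durations roll up into the 'FTL startup' total reported on the next line; the large items in this run are Restore P2L checkpoints (~21 ms), Initialize NV cache (~19 ms), and Restore valid map metadata (~15 ms). A quick, hypothetical way to rank the slow steps in a capture like this, assuming one log entry per line as the logger emits them and build.log as a placeholder file name:

    # Pair each trace_step 'name:' entry with the 'duration:' entry that
    # follows it, then list the slowest steps first.
    grep -E 'trace_step: .* (name|duration):' build.log |
      sed -E 's/.* name: (.*)/\1/; s/.* duration: ([0-9.]+) ms.*/\1/' |
      paste - - |               # -> "<step name><TAB><milliseconds>"
      sort -t $'\t' -k2,2 -rn |
      head
]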
[2024-11-29 10:29:03.547474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.477 ms, result 0 00:23:25.524  [2024-11-29T10:29:05.928Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-29T10:29:06.877Z] Copying: 48/1024 [MB] (31 MBps) [2024-11-29T10:29:07.822Z] Copying: 61/1024 [MB] (13 MBps) [2024-11-29T10:29:08.763Z] Copying: 72/1024 [MB] (10 MBps) [2024-11-29T10:29:09.698Z] Copying: 82/1024 [MB] (10 MBps) [2024-11-29T10:29:10.633Z] Copying: 118/1024 [MB] (35 MBps) [2024-11-29T10:29:11.574Z] Copying: 169/1024 [MB] (51 MBps) [2024-11-29T10:29:12.966Z] Copying: 201/1024 [MB] (31 MBps) [2024-11-29T10:29:13.909Z] Copying: 217/1024 [MB] (16 MBps) [2024-11-29T10:29:14.852Z] Copying: 229/1024 [MB] (12 MBps) [2024-11-29T10:29:15.791Z] Copying: 241/1024 [MB] (11 MBps) [2024-11-29T10:29:16.734Z] Copying: 255/1024 [MB] (13 MBps) [2024-11-29T10:29:17.678Z] Copying: 274/1024 [MB] (18 MBps) [2024-11-29T10:29:18.625Z] Copying: 291024/1048576 [kB] (10216 kBps) [2024-11-29T10:29:19.569Z] Copying: 294/1024 [MB] (10 MBps) [2024-11-29T10:29:20.955Z] Copying: 308/1024 [MB] (13 MBps) [2024-11-29T10:29:21.901Z] Copying: 323/1024 [MB] (15 MBps) [2024-11-29T10:29:22.846Z] Copying: 340/1024 [MB] (16 MBps) [2024-11-29T10:29:23.786Z] Copying: 351/1024 [MB] (11 MBps) [2024-11-29T10:29:24.719Z] Copying: 368/1024 [MB] (17 MBps) [2024-11-29T10:29:25.653Z] Copying: 399/1024 [MB] (30 MBps) [2024-11-29T10:29:26.595Z] Copying: 429/1024 [MB] (30 MBps) [2024-11-29T10:29:27.977Z] Copying: 446/1024 [MB] (16 MBps) [2024-11-29T10:29:28.917Z] Copying: 465/1024 [MB] (19 MBps) [2024-11-29T10:29:29.861Z] Copying: 485/1024 [MB] (20 MBps) [2024-11-29T10:29:30.805Z] Copying: 496/1024 [MB] (10 MBps) [2024-11-29T10:29:31.747Z] Copying: 509/1024 [MB] (13 MBps) [2024-11-29T10:29:32.733Z] Copying: 529/1024 [MB] (20 MBps) [2024-11-29T10:29:33.706Z] Copying: 547/1024 [MB] (17 MBps) [2024-11-29T10:29:34.641Z] Copying: 569/1024 [MB] (22 MBps) [2024-11-29T10:29:35.579Z] Copying: 593/1024 [MB] (24 MBps) [2024-11-29T10:29:37.002Z] Copying: 619/1024 [MB] (25 MBps) [2024-11-29T10:29:37.576Z] Copying: 634/1024 [MB] (15 MBps) [2024-11-29T10:29:38.968Z] Copying: 652/1024 [MB] (18 MBps) [2024-11-29T10:29:39.911Z] Copying: 668/1024 [MB] (15 MBps) [2024-11-29T10:29:40.858Z] Copying: 689/1024 [MB] (20 MBps) [2024-11-29T10:29:41.804Z] Copying: 710/1024 [MB] (21 MBps) [2024-11-29T10:29:42.749Z] Copying: 724/1024 [MB] (13 MBps) [2024-11-29T10:29:43.692Z] Copying: 751536/1048576 [kB] (9680 kBps) [2024-11-29T10:29:44.637Z] Copying: 744/1024 [MB] (10 MBps) [2024-11-29T10:29:45.583Z] Copying: 754/1024 [MB] (10 MBps) [2024-11-29T10:29:46.972Z] Copying: 764/1024 [MB] (10 MBps) [2024-11-29T10:29:47.920Z] Copying: 774/1024 [MB] (10 MBps) [2024-11-29T10:29:48.865Z] Copying: 784/1024 [MB] (10 MBps) [2024-11-29T10:29:49.812Z] Copying: 794/1024 [MB] (10 MBps) [2024-11-29T10:29:50.757Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-29T10:29:51.708Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-29T10:29:52.649Z] Copying: 844656/1048576 [kB] (10084 kBps) [2024-11-29T10:29:53.593Z] Copying: 835/1024 [MB] (10 MBps) [2024-11-29T10:29:54.984Z] Copying: 845/1024 [MB] (10 MBps) [2024-11-29T10:29:55.929Z] Copying: 856/1024 [MB] (10 MBps) [2024-11-29T10:29:56.872Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-29T10:29:57.846Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-29T10:29:58.791Z] Copying: 887/1024 [MB] (10 MBps) [2024-11-29T10:29:59.735Z] Copying: 897/1024 [MB] (10 MBps) [2024-11-29T10:30:00.678Z] Copying: 908/1024 [MB] (10 
MBps) [2024-11-29T10:30:01.618Z] Copying: 918/1024 [MB] (10 MBps) [2024-11-29T10:30:02.560Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-29T10:30:03.948Z] Copying: 961992/1048576 [kB] (10236 kBps) [2024-11-29T10:30:04.895Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-29T10:30:05.840Z] Copying: 959/1024 [MB] (10 MBps) [2024-11-29T10:30:06.786Z] Copying: 992480/1048576 [kB] (9796 kBps) [2024-11-29T10:30:07.731Z] Copying: 979/1024 [MB] (10 MBps) [2024-11-29T10:30:08.777Z] Copying: 1012952/1048576 [kB] (10144 kBps) [2024-11-29T10:30:09.719Z] Copying: 1004/1024 [MB] (14 MBps) [2024-11-29T10:30:10.662Z] Copying: 1019/1024 [MB] (15 MBps) [2024-11-29T10:30:10.923Z] Copying: 1048308/1048576 [kB] (4228 kBps) [2024-11-29T10:30:10.923Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 10:30:10.807625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.807693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:31.458 [2024-11-29 10:30:10.807718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:31.458 [2024-11-29 10:30:10.807727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.458 [2024-11-29 10:30:10.810775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:31.458 [2024-11-29 10:30:10.813599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.813646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:31.458 [2024-11-29 10:30:10.813663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:24:31.458 [2024-11-29 10:30:10.813671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.458 [2024-11-29 10:30:10.826735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.826782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:31.458 [2024-11-29 10:30:10.826794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.193 ms 00:24:31.458 [2024-11-29 10:30:10.826813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.458 [2024-11-29 10:30:10.851405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.851452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:31.458 [2024-11-29 10:30:10.851477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.573 ms 00:24:31.458 [2024-11-29 10:30:10.851489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.458 [2024-11-29 10:30:10.857675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.857711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:31.458 [2024-11-29 10:30:10.857722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:24:31.458 [2024-11-29 10:30:10.857731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.458 [2024-11-29 10:30:10.860573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.860617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:31.458 [2024-11-29 10:30:10.860627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:24:31.458 [2024-11-29 10:30:10.860634] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.458 [2024-11-29 10:30:10.865898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.458 [2024-11-29 10:30:10.865959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:31.458 [2024-11-29 10:30:10.865970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:24:31.458 [2024-11-29 10:30:10.865988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.034184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.034240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:31.805 [2024-11-29 10:30:11.034253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.139 ms 00:24:31.805 [2024-11-29 10:30:11.034262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.036913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.036960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:31.805 [2024-11-29 10:30:11.036981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:24:31.805 [2024-11-29 10:30:11.036989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.038986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.039035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:31.805 [2024-11-29 10:30:11.039045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.953 ms 00:24:31.805 [2024-11-29 10:30:11.039053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.040791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.040857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:31.805 [2024-11-29 10:30:11.040867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:24:31.805 [2024-11-29 10:30:11.040874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.042632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.042684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:31.805 [2024-11-29 10:30:11.042695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.688 ms 00:24:31.805 [2024-11-29 10:30:11.042702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.042744] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:31.805 [2024-11-29 10:30:11.042759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 96768 / 261120 wr_cnt: 1 state: open 00:24:31.805 [2024-11-29 10:30:11.042770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042814] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.042997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043019] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 
10:30:11.043238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:24:31.805 [2024-11-29 10:30:11.043441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:31.805 [2024-11-29 10:30:11.043617] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:31.805 [2024-11-29 10:30:11.043632] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b21615cc-511d-4102-af29-37fdb7e3e0e1 00:24:31.805 [2024-11-29 10:30:11.043642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 96768 00:24:31.805 [2024-11-29 10:30:11.043654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97728 00:24:31.805 [2024-11-29 10:30:11.043666] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 96768 00:24:31.805 [2024-11-29 10:30:11.043676] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:24:31.805 [2024-11-29 10:30:11.043684] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:31.805 [2024-11-29 10:30:11.043698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:31.805 [2024-11-29 10:30:11.043706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:31.805 [2024-11-29 10:30:11.043713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:31.805 [2024-11-29 10:30:11.043720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:31.805 [2024-11-29 10:30:11.043727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.043735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:31.805 [2024-11-29 10:30:11.043749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:24:31.805 [2024-11-29 10:30:11.043757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.046342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.046385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.805 [2024-11-29 10:30:11.046398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:24:31.805 [2024-11-29 10:30:11.046407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.046540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.805 [2024-11-29 10:30:11.046550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.805 [2024-11-29 10:30:11.046560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:24:31.805 [2024-11-29 10:30:11.046574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.805 [2024-11-29 10:30:11.055099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.805 [2024-11-29 10:30:11.055152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.806 [2024-11-29 10:30:11.055164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.055173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.055241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.055250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.806 [2024-11-29 10:30:11.055259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.055273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.055341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.055352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.806 [2024-11-29 10:30:11.055361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.055369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.055386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
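[The statistics dump just above lets the WAF figure be checked directly: write amplification here is total device writes over user writes,

    \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{97728}{96768} \approx 1.0099

matching the reported 1.0099; the extra 960 writes on top of the 96768 user writes are presumably FTL metadata and housekeeping traffic.]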
00:24:31.806 [2024-11-29 10:30:11.055395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.806 [2024-11-29 10:30:11.055403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.055411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.070539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.070600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.806 [2024-11-29 10:30:11.070613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.070622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.082457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.082517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.806 [2024-11-29 10:30:11.082529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.082539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.082599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.082609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.806 [2024-11-29 10:30:11.082627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.082635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.082670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.082680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.806 [2024-11-29 10:30:11.082688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.082697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.082765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.082779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.806 [2024-11-29 10:30:11.082788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.082835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.082871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.082881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:31.806 [2024-11-29 10:30:11.082889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.082897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.082946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.082959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.806 [2024-11-29 10:30:11.082967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.082975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 
10:30:11.083022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.806 [2024-11-29 10:30:11.083032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.806 [2024-11-29 10:30:11.083041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.806 [2024-11-29 10:30:11.083057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.806 [2024-11-29 10:30:11.083195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.523 ms, result 0 00:24:32.065 00:24:32.065 00:24:32.065 10:30:11 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:32.065 [2024-11-29 10:30:11.462447] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:24:32.065 [2024-11-29 10:30:11.462597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90664 ] 00:24:32.325 [2024-11-29 10:30:11.609353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.325 [2024-11-29 10:30:11.639893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.325 [2024-11-29 10:30:11.758568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:32.325 [2024-11-29 10:30:11.758653] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:32.587 [2024-11-29 10:30:11.920779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.920867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:32.587 [2024-11-29 10:30:11.920883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:32.587 [2024-11-29 10:30:11.920892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.920954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.920965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.587 [2024-11-29 10:30:11.920975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:32.587 [2024-11-29 10:30:11.920991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.921021] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:32.587 [2024-11-29 10:30:11.921332] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:32.587 [2024-11-29 10:30:11.921370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.921379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.587 [2024-11-29 10:30:11.921391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:24:32.587 [2024-11-29 10:30:11.921403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.923301] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 
0 00:24:32.587 [2024-11-29 10:30:11.927441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.927495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:32.587 [2024-11-29 10:30:11.927507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.142 ms 00:24:32.587 [2024-11-29 10:30:11.927526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.927610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.927620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:32.587 [2024-11-29 10:30:11.927631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:32.587 [2024-11-29 10:30:11.927643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.936520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.936566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.587 [2024-11-29 10:30:11.936585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.832 ms 00:24:32.587 [2024-11-29 10:30:11.936597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.936705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.936715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.587 [2024-11-29 10:30:11.936725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:32.587 [2024-11-29 10:30:11.936733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.936793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.936842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:32.587 [2024-11-29 10:30:11.936855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:32.587 [2024-11-29 10:30:11.936867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.936894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:32.587 [2024-11-29 10:30:11.939169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.939212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.587 [2024-11-29 10:30:11.939222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:24:32.587 [2024-11-29 10:30:11.939229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.939272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.939281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:32.587 [2024-11-29 10:30:11.939291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:32.587 [2024-11-29 10:30:11.939303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.939322] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:32.587 [2024-11-29 10:30:11.939345] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x150 bytes 00:24:32.587 [2024-11-29 10:30:11.939386] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:32.587 [2024-11-29 10:30:11.939407] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:32.587 [2024-11-29 10:30:11.939514] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:32.587 [2024-11-29 10:30:11.939530] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:32.587 [2024-11-29 10:30:11.939543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:32.587 [2024-11-29 10:30:11.939557] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:32.587 [2024-11-29 10:30:11.939566] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:32.587 [2024-11-29 10:30:11.939576] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:32.587 [2024-11-29 10:30:11.939585] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:32.587 [2024-11-29 10:30:11.939593] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:32.587 [2024-11-29 10:30:11.939602] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:32.587 [2024-11-29 10:30:11.939610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.939621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:32.587 [2024-11-29 10:30:11.939632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:24:32.587 [2024-11-29 10:30:11.939639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.939727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.587 [2024-11-29 10:30:11.939746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:32.587 [2024-11-29 10:30:11.939754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:32.587 [2024-11-29 10:30:11.939762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.587 [2024-11-29 10:30:11.939887] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:32.587 [2024-11-29 10:30:11.939900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:32.587 [2024-11-29 10:30:11.939909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.587 [2024-11-29 10:30:11.939926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.587 [2024-11-29 10:30:11.939936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:32.587 [2024-11-29 10:30:11.939944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:32.587 [2024-11-29 10:30:11.939952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:32.587 [2024-11-29 10:30:11.939960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:32.587 [2024-11-29 10:30:11.939969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:32.587 [2024-11-29 10:30:11.939978] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:24:32.587 [2024-11-29 10:30:11.939986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:32.587 [2024-11-29 10:30:11.939994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:32.588 [2024-11-29 10:30:11.940002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.588 [2024-11-29 10:30:11.940014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:32.588 [2024-11-29 10:30:11.940023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:32.588 [2024-11-29 10:30:11.940032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:32.588 [2024-11-29 10:30:11.940050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:32.588 [2024-11-29 10:30:11.940074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:32.588 [2024-11-29 10:30:11.940098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:32.588 [2024-11-29 10:30:11.940122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:32.588 [2024-11-29 10:30:11.940147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:32.588 [2024-11-29 10:30:11.940173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.588 [2024-11-29 10:30:11.940189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:32.588 [2024-11-29 10:30:11.940197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:32.588 [2024-11-29 10:30:11.940204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.588 [2024-11-29 10:30:11.940212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:32.588 [2024-11-29 10:30:11.940220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:32.588 [2024-11-29 10:30:11.940228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:32.588 [2024-11-29 10:30:11.940244] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:32.588 [2024-11-29 10:30:11.940251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:32.588 [2024-11-29 10:30:11.940270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:32.588 [2024-11-29 10:30:11.940281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.588 [2024-11-29 10:30:11.940299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:32.588 [2024-11-29 10:30:11.940308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:32.588 [2024-11-29 10:30:11.940316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:32.588 [2024-11-29 10:30:11.940325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:32.588 [2024-11-29 10:30:11.940332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:32.588 [2024-11-29 10:30:11.940338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:32.588 [2024-11-29 10:30:11.940347] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:32.588 [2024-11-29 10:30:11.940360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.588 [2024-11-29 10:30:11.940369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:32.588 [2024-11-29 10:30:11.940377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:32.588 [2024-11-29 10:30:11.940386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:32.588 [2024-11-29 10:30:11.940394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:32.588 [2024-11-29 10:30:11.940402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:32.588 [2024-11-29 10:30:11.940409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:32.588 [2024-11-29 10:30:11.940419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:32.588 [2024-11-29 10:30:11.940427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:32.588 [2024-11-29 10:30:11.940435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:32.588 [2024-11-29 10:30:11.940448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:32.588 [2024-11-29 10:30:11.940456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 
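The hex fields in the superblock dump are counts of FTL blocks, so they reproduce the MiB figures printed in the layout dump above; a minimal bash cross-check (the helper name is hypothetical, and the 4096-byte block size is an assumption inferred from the dumps agreeing):

# Convert an FTL region size given in blocks (hex, as in the superblock dump)
# to MiB, assuming 4096-byte FTL blocks:
blocks_to_mib() {
  awk -v blocks=$(( $1 )) 'BEGIN { printf "%.2f\n", blocks * 4096 / 1048576 }'
}
blocks_to_mib 0x5000   # 80.00 -> matches "Region l2p ... blocks: 80.00 MiB"
blocks_to_mib 0x800    # 8.00  -> matches each p2l region, "blocks: 8.00 MiB"
# The same 80 MiB also follows from the dump directly:
# 20971520 L2P entries * 4 B address size = 80 MiB.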
00:24:32.588 [2024-11-29 10:30:11.940463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:32.588 [2024-11-29 10:30:11.940470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:32.588 [2024-11-29 10:30:11.940478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:32.588 [2024-11-29 10:30:11.940486] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:32.588 [2024-11-29 10:30:11.940494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.588 [2024-11-29 10:30:11.940502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:32.588 [2024-11-29 10:30:11.940509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:32.588 [2024-11-29 10:30:11.940516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:32.588 [2024-11-29 10:30:11.940523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:32.588 [2024-11-29 10:30:11.940532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.588 [2024-11-29 10:30:11.940543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:32.588 [2024-11-29 10:30:11.940553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:24:32.588 [2024-11-29 10:30:11.940563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.588 [2024-11-29 10:30:11.955942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.588 [2024-11-29 10:30:11.955993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.588 [2024-11-29 10:30:11.956011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.333 ms 00:24:32.588 [2024-11-29 10:30:11.956022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.956116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.956124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:32.589 [2024-11-29 10:30:11.956133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:32.589 [2024-11-29 10:30:11.956141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.985455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.985561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:32.589 [2024-11-29 10:30:11.985592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.244 ms 00:24:32.589 [2024-11-29 10:30:11.985615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.985731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.985759] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:32.589 [2024-11-29 10:30:11.985782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:32.589 [2024-11-29 10:30:11.985835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.986682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.986731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:32.589 [2024-11-29 10:30:11.986742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:24:32.589 [2024-11-29 10:30:11.986750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.986934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.986946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:32.589 [2024-11-29 10:30:11.986958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:24:32.589 [2024-11-29 10:30:11.986967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.995387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.995443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:32.589 [2024-11-29 10:30:11.995454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.398 ms 00:24:32.589 [2024-11-29 10:30:11.995463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:11.999664] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:32.589 [2024-11-29 10:30:11.999716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:32.589 [2024-11-29 10:30:11.999736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:11.999745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:32.589 [2024-11-29 10:30:11.999755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.165 ms 00:24:32.589 [2024-11-29 10:30:11.999764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:12.015883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:12.015936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:32.589 [2024-11-29 10:30:12.015950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.046 ms 00:24:32.589 [2024-11-29 10:30:12.015969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:12.018999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:12.019046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:32.589 [2024-11-29 10:30:12.019057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:24:32.589 [2024-11-29 10:30:12.019065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:12.021712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:12.021760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:32.589 
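Every management step in this trace is emitted from mngt/ftl_mngt.c as an Action/name/duration/status quadruple; to tabulate step durations from a saved copy of this console output (build.log is a hypothetical capture, and the pattern assumes the exact trace_step wording shown here):

# Extract the "name: ..." and "duration: ... ms" records and pair them up
# two-by-two (assumes the capture starts at a quadruple boundary):
grep -oE 'name: [A-Za-z0-9 ]*[A-Za-z]|duration: [0-9.]+ ms' build.log | paste - -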
[2024-11-29 10:30:12.021771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:24:32.589 [2024-11-29 10:30:12.021779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:12.022200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:12.022227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:32.589 [2024-11-29 10:30:12.022239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:24:32.589 [2024-11-29 10:30:12.022252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.589 [2024-11-29 10:30:12.046543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.589 [2024-11-29 10:30:12.046607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:32.589 [2024-11-29 10:30:12.046622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.269 ms 00:24:32.589 [2024-11-29 10:30:12.046631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.054899] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:32.850 [2024-11-29 10:30:12.057889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.057925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:32.850 [2024-11-29 10:30:12.057938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.202 ms 00:24:32.850 [2024-11-29 10:30:12.057948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.058055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.058070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:32.850 [2024-11-29 10:30:12.058080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:32.850 [2024-11-29 10:30:12.058099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.059844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.059886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:32.850 [2024-11-29 10:30:12.059896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:24:32.850 [2024-11-29 10:30:12.059904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.059945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.059954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:32.850 [2024-11-29 10:30:12.059964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:32.850 [2024-11-29 10:30:12.059976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.060012] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:32.850 [2024-11-29 10:30:12.060023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.060034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:32.850 [2024-11-29 10:30:12.060045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:32.850 
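Once 'FTL startup' finishes just below, restore.sh@80 drives the read-back whose Copying progress follows, and restore.sh@82 verifies the result; the same two steps as a standalone sketch, with every flag and path taken from this log (SPDK_REPO is shorthand introduced here):

SPDK_REPO=/home/vagrant/spdk_repo/spdk
# Read 262144 blocks back from the restored FTL device ftl0, skipping the
# first 131072, into a regular file...
"$SPDK_REPO/build/bin/spdk_dd" --ib=ftl0 \
    --of="$SPDK_REPO/test/ftl/testfile" \
    --json="$SPDK_REPO/test/ftl/config/ftl.json" \
    --skip=131072 --count=262144
# ...then compare it against the checksum recorded before the shutdown:
md5sum -c "$SPDK_REPO/test/ftl/testfile.md5"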
[2024-11-29 10:30:12.060053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.065629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.065678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:32.850 [2024-11-29 10:30:12.065690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:24:32.850 [2024-11-29 10:30:12.065698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.065784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.850 [2024-11-29 10:30:12.065813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:32.850 [2024-11-29 10:30:12.065823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:32.850 [2024-11-29 10:30:12.065834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.850 [2024-11-29 10:30:12.067032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.766 ms, result 0 00:24:33.793  [2024-11-29T10:30:14.650Z] Copying: 6724/1048576 [kB] (6724 kBps) [2024-11-29T10:30:15.595Z] Copying: 18/1024 [MB] (11 MBps) [2024-11-29T10:30:16.539Z] Copying: 31/1024 [MB] (13 MBps) [2024-11-29T10:30:17.480Z] Copying: 49/1024 [MB] (17 MBps) [2024-11-29T10:30:18.423Z] Copying: 66/1024 [MB] (17 MBps) [2024-11-29T10:30:19.368Z] Copying: 83/1024 [MB] (16 MBps) [2024-11-29T10:30:20.312Z] Copying: 93/1024 [MB] (10 MBps) [2024-11-29T10:30:21.257Z] Copying: 106/1024 [MB] (13 MBps) [2024-11-29T10:30:22.645Z] Copying: 117/1024 [MB] (11 MBps) [2024-11-29T10:30:23.591Z] Copying: 136/1024 [MB] (18 MBps) [2024-11-29T10:30:24.539Z] Copying: 148/1024 [MB] (12 MBps) [2024-11-29T10:30:25.537Z] Copying: 159/1024 [MB] (10 MBps) [2024-11-29T10:30:26.482Z] Copying: 174/1024 [MB] (15 MBps) [2024-11-29T10:30:27.427Z] Copying: 188/1024 [MB] (13 MBps) [2024-11-29T10:30:28.373Z] Copying: 199/1024 [MB] (10 MBps) [2024-11-29T10:30:29.321Z] Copying: 213280/1048576 [kB] (9436 kBps) [2024-11-29T10:30:30.269Z] Copying: 222240/1048576 [kB] (8960 kBps) [2024-11-29T10:30:31.657Z] Copying: 231968/1048576 [kB] (9728 kBps) [2024-11-29T10:30:32.603Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-29T10:30:33.548Z] Copying: 247/1024 [MB] (10 MBps) [2024-11-29T10:30:34.493Z] Copying: 258/1024 [MB] (10 MBps) [2024-11-29T10:30:35.437Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-29T10:30:36.382Z] Copying: 279/1024 [MB] (10 MBps) [2024-11-29T10:30:37.400Z] Copying: 290/1024 [MB] (10 MBps) [2024-11-29T10:30:38.346Z] Copying: 301/1024 [MB] (10 MBps) [2024-11-29T10:30:39.289Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-29T10:30:40.674Z] Copying: 324/1024 [MB] (12 MBps) [2024-11-29T10:30:41.618Z] Copying: 342/1024 [MB] (17 MBps) [2024-11-29T10:30:42.562Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-29T10:30:43.507Z] Copying: 371/1024 [MB] (12 MBps) [2024-11-29T10:30:44.452Z] Copying: 383/1024 [MB] (12 MBps) [2024-11-29T10:30:45.396Z] Copying: 396/1024 [MB] (12 MBps) [2024-11-29T10:30:46.396Z] Copying: 407/1024 [MB] (11 MBps) [2024-11-29T10:30:47.341Z] Copying: 417/1024 [MB] (10 MBps) [2024-11-29T10:30:48.287Z] Copying: 428/1024 [MB] (10 MBps) [2024-11-29T10:30:49.678Z] Copying: 448672/1048576 [kB] (9896 kBps) [2024-11-29T10:30:50.268Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-29T10:30:51.658Z] Copying: 469328/1048576 [kB] (9952 kBps) [2024-11-29T10:30:52.604Z] Copying: 
479328/1048576 [kB] (10000 kBps) [2024-11-29T10:30:53.549Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-29T10:30:54.496Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-29T10:30:55.442Z] Copying: 500/1024 [MB] (10 MBps) [2024-11-29T10:30:56.383Z] Copying: 510/1024 [MB] (10 MBps) [2024-11-29T10:30:57.329Z] Copying: 521/1024 [MB] (10 MBps) [2024-11-29T10:30:58.275Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-29T10:30:59.660Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-29T10:31:00.604Z] Copying: 554/1024 [MB] (10 MBps) [2024-11-29T10:31:01.546Z] Copying: 564/1024 [MB] (10 MBps) [2024-11-29T10:31:02.511Z] Copying: 574/1024 [MB] (10 MBps) [2024-11-29T10:31:03.489Z] Copying: 584/1024 [MB] (10 MBps) [2024-11-29T10:31:04.434Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-29T10:31:05.376Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-29T10:31:06.321Z] Copying: 618/1024 [MB] (11 MBps) [2024-11-29T10:31:07.266Z] Copying: 643616/1048576 [kB] (9996 kBps) [2024-11-29T10:31:08.657Z] Copying: 653400/1048576 [kB] (9784 kBps) [2024-11-29T10:31:09.603Z] Copying: 663616/1048576 [kB] (10216 kBps) [2024-11-29T10:31:10.543Z] Copying: 673792/1048576 [kB] (10176 kBps) [2024-11-29T10:31:11.482Z] Copying: 675/1024 [MB] (17 MBps) [2024-11-29T10:31:12.424Z] Copying: 692/1024 [MB] (16 MBps) [2024-11-29T10:31:13.369Z] Copying: 715/1024 [MB] (23 MBps) [2024-11-29T10:31:14.315Z] Copying: 726/1024 [MB] (10 MBps) [2024-11-29T10:31:15.262Z] Copying: 743/1024 [MB] (17 MBps) [2024-11-29T10:31:16.265Z] Copying: 757/1024 [MB] (14 MBps) [2024-11-29T10:31:17.654Z] Copying: 775/1024 [MB] (17 MBps) [2024-11-29T10:31:18.598Z] Copying: 796/1024 [MB] (20 MBps) [2024-11-29T10:31:19.545Z] Copying: 811/1024 [MB] (15 MBps) [2024-11-29T10:31:20.491Z] Copying: 827/1024 [MB] (15 MBps) [2024-11-29T10:31:21.437Z] Copying: 848/1024 [MB] (20 MBps) [2024-11-29T10:31:22.378Z] Copying: 867/1024 [MB] (19 MBps) [2024-11-29T10:31:23.311Z] Copying: 884/1024 [MB] (16 MBps) [2024-11-29T10:31:24.686Z] Copying: 931/1024 [MB] (46 MBps) [2024-11-29T10:31:25.254Z] Copying: 978/1024 [MB] (47 MBps) [2024-11-29T10:31:25.514Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-29 10:31:25.409573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.409675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:46.049 [2024-11-29 10:31:25.409697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.049 [2024-11-29 10:31:25.409710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.409750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:46.049 [2024-11-29 10:31:25.410478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.410517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:46.049 [2024-11-29 10:31:25.410537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:25:46.049 [2024-11-29 10:31:25.410551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.410923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.410947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:46.049 [2024-11-29 10:31:25.410961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:25:46.049 [2024-11-29 10:31:25.410973] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.419344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.419392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:46.049 [2024-11-29 10:31:25.419407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.348 ms 00:25:46.049 [2024-11-29 10:31:25.419421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.426755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.426786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:46.049 [2024-11-29 10:31:25.426805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.285 ms 00:25:46.049 [2024-11-29 10:31:25.426814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.428133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.428168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:46.049 [2024-11-29 10:31:25.428177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:25:46.049 [2024-11-29 10:31:25.428185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.431814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.431844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:46.049 [2024-11-29 10:31:25.431855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:25:46.049 [2024-11-29 10:31:25.431883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.488602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.488679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:46.049 [2024-11-29 10:31:25.488693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.677 ms 00:25:46.049 [2024-11-29 10:31:25.488704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.490790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.490841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:46.049 [2024-11-29 10:31:25.490851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:25:46.049 [2024-11-29 10:31:25.490860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.049 [2024-11-29 10:31:25.492048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.049 [2024-11-29 10:31:25.492080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:46.049 [2024-11-29 10:31:25.492089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:25:46.050 [2024-11-29 10:31:25.492097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.050 [2024-11-29 10:31:25.492983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.050 [2024-11-29 10:31:25.493014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:46.050 [2024-11-29 10:31:25.493023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 
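The statistics dumped below report total writes 35264 against user writes 34304; the "WAF: 1.0280" line is simply their ratio, the write amplification factor:

# Write amplification = media writes / host writes:
awk 'BEGIN { printf "WAF: %.4f\n", 35264 / 34304 }'   # -> WAF: 1.0280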
00:25:46.050 [2024-11-29 10:31:25.493030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.050 [2024-11-29 10:31:25.493913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.050 [2024-11-29 10:31:25.493942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:46.050 [2024-11-29 10:31:25.493951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:25:46.050 [2024-11-29 10:31:25.493959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.050 [2024-11-29 10:31:25.493995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:46.050 [2024-11-29 10:31:25.494011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:46.050 [2024-11-29 10:31:25.494022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 
10:31:25.494169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:25:46.050 [2024-11-29 10:31:25.494375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:46.050 [2024-11-29 10:31:25.494609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:46.051 [2024-11-29 10:31:25.494851] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:46.051 [2024-11-29 10:31:25.494859] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b21615cc-511d-4102-af29-37fdb7e3e0e1 00:25:46.051 [2024-11-29 10:31:25.494868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:46.051 [2024-11-29 10:31:25.494880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35264 00:25:46.051 [2024-11-29 10:31:25.494890] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34304 00:25:46.051 [2024-11-29 10:31:25.494900] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0280 00:25:46.051 [2024-11-29 10:31:25.494907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:46.051 [2024-11-29 10:31:25.494916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:46.051 [2024-11-29 10:31:25.494932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:46.051 [2024-11-29 10:31:25.494940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:46.051 [2024-11-29 10:31:25.494947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:46.051 [2024-11-29 10:31:25.494954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.051 [2024-11-29 10:31:25.494963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:46.051 [2024-11-29 10:31:25.494975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:25:46.051 [2024-11-29 10:31:25.494983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.051 [2024-11-29 10:31:25.496852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.051 [2024-11-29 10:31:25.496877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:46.051 [2024-11-29 10:31:25.496888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:25:46.051 [2024-11-29 10:31:25.496898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.051 [2024-11-29 10:31:25.496996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.051 [2024-11-29 10:31:25.497007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:46.051 [2024-11-29 10:31:25.497016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:46.051 [2024-11-29 10:31:25.497026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.051 [2024-11-29 10:31:25.503174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.051 [2024-11-29 10:31:25.503209] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.051 [2024-11-29 10:31:25.503219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.051 [2024-11-29 10:31:25.503227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.051 [2024-11-29 10:31:25.503281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.051 [2024-11-29 10:31:25.503295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.051 [2024-11-29 10:31:25.503304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.051 [2024-11-29 10:31:25.503311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.051 [2024-11-29 10:31:25.503372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.051 [2024-11-29 10:31:25.503388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.051 [2024-11-29 10:31:25.503397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.051 [2024-11-29 10:31:25.503404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.051 [2024-11-29 10:31:25.503420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.051 [2024-11-29 10:31:25.503429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.051 [2024-11-29 10:31:25.503436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.051 [2024-11-29 10:31:25.503443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.309 [2024-11-29 10:31:25.515074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.515118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.310 [2024-11-29 10:31:25.515130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.515138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.310 [2024-11-29 10:31:25.524463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.310 [2024-11-29 10:31:25.524576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.310 [2024-11-29 10:31:25.524626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.310 [2024-11-29 10:31:25.524736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:46.310 [2024-11-29 10:31:25.524792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.310 [2024-11-29 10:31:25.524878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.524931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.310 [2024-11-29 10:31:25.524942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.310 [2024-11-29 10:31:25.524950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.310 [2024-11-29 10:31:25.524960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.310 [2024-11-29 10:31:25.525099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 115.499 ms, result 0 00:25:46.310 00:25:46.310 00:25:46.310 10:31:25 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:48.842 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:48.842 10:31:27 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:48.842 10:31:27 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:48.842 10:31:27 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:48.842 Process with pid 88167 is not found 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88167 00:25:48.842 10:31:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88167 ']' 00:25:48.842 10:31:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88167 00:25:48.842 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88167) - No such process 00:25:48.842 10:31:28 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88167 is not found' 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:48.842 Remove shared memory files 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm 
-f 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:48.842 10:31:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:48.842 00:25:48.842 real 5m17.869s 00:25:48.842 user 5m5.983s 00:25:48.842 sys 0m11.313s 00:25:48.842 10:31:28 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.842 ************************************ 00:25:48.842 END TEST ftl_restore 00:25:48.842 10:31:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:48.842 ************************************ 00:25:48.842 10:31:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:48.842 10:31:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:48.842 10:31:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.842 10:31:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:48.842 ************************************ 00:25:48.842 START TEST ftl_dirty_shutdown 00:25:48.842 ************************************ 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:48.842 * Looking for test storage... 00:25:48.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.842 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.843 --rc genhtml_branch_coverage=1 00:25:48.843 --rc genhtml_function_coverage=1 00:25:48.843 --rc genhtml_legend=1 00:25:48.843 --rc geninfo_all_blocks=1 00:25:48.843 --rc geninfo_unexecuted_blocks=1 00:25:48.843 00:25:48.843 ' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.843 --rc genhtml_branch_coverage=1 00:25:48.843 --rc genhtml_function_coverage=1 00:25:48.843 --rc genhtml_legend=1 00:25:48.843 --rc geninfo_all_blocks=1 00:25:48.843 --rc geninfo_unexecuted_blocks=1 00:25:48.843 00:25:48.843 ' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.843 --rc genhtml_branch_coverage=1 00:25:48.843 --rc genhtml_function_coverage=1 00:25:48.843 --rc genhtml_legend=1 00:25:48.843 --rc geninfo_all_blocks=1 00:25:48.843 --rc geninfo_unexecuted_blocks=1 00:25:48.843 00:25:48.843 ' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.843 --rc genhtml_branch_coverage=1 00:25:48.843 --rc genhtml_function_coverage=1 00:25:48.843 --rc genhtml_legend=1 00:25:48.843 --rc geninfo_all_blocks=1 00:25:48.843 --rc geninfo_unexecuted_blocks=1 00:25:48.843 00:25:48.843 ' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:48.843 10:31:28 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91513 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91513 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91513 ']' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:48.843 10:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:48.843 [2024-11-29 10:31:28.298676] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
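For reference, the bdev stack assembled in the trace that follows (create_base_bdev and create_nv_cache_bdev from ftl/common.sh, then dirty_shutdown.sh@61) condenses to the RPC sequence below. This is a sketch, not part of the log: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the two UUID placeholders are runtime values printed further down, different on every run.

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device, surfaces nvme0n1
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on top of the base bdev
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # thin-provisioned 103424 MiB lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device, surfaces nvc0n1
  rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split, nvc0n1p0
  rpc.py bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0   # ftl0, nvc0n1p0 as write buffer cache

Before the lvstore is created, the clear_lvols step removes any leftover store via bdev_lvol_get_lvstores and bdev_lvol_delete_lvstore, as traced below.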
00:25:48.843 [2024-11-29 10:31:28.298821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91513 ] 00:25:49.131 [2024-11-29 10:31:28.445109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.131 [2024-11-29 10:31:28.470310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:49.699 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:49.959 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:50.218 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:50.218 { 00:25:50.218 "name": "nvme0n1", 00:25:50.218 "aliases": [ 00:25:50.218 "897d45ae-1f05-4113-8c65-150abb5b47c3" 00:25:50.218 ], 00:25:50.218 "product_name": "NVMe disk", 00:25:50.218 "block_size": 4096, 00:25:50.218 "num_blocks": 1310720, 00:25:50.218 "uuid": "897d45ae-1f05-4113-8c65-150abb5b47c3", 00:25:50.218 "numa_id": -1, 00:25:50.218 "assigned_rate_limits": { 00:25:50.218 "rw_ios_per_sec": 0, 00:25:50.218 "rw_mbytes_per_sec": 0, 00:25:50.218 "r_mbytes_per_sec": 0, 00:25:50.218 "w_mbytes_per_sec": 0 00:25:50.218 }, 00:25:50.218 "claimed": true, 00:25:50.218 "claim_type": "read_many_write_one", 00:25:50.218 "zoned": false, 00:25:50.218 "supported_io_types": { 00:25:50.218 "read": true, 00:25:50.218 "write": true, 00:25:50.218 "unmap": true, 00:25:50.218 "flush": true, 00:25:50.218 "reset": true, 00:25:50.218 "nvme_admin": true, 00:25:50.218 "nvme_io": true, 00:25:50.218 "nvme_io_md": false, 00:25:50.218 "write_zeroes": true, 00:25:50.218 "zcopy": false, 00:25:50.218 "get_zone_info": false, 00:25:50.218 "zone_management": false, 00:25:50.218 "zone_append": false, 00:25:50.218 "compare": true, 00:25:50.218 "compare_and_write": false, 00:25:50.218 "abort": true, 00:25:50.218 "seek_hole": false, 00:25:50.218 "seek_data": false, 00:25:50.218 
"copy": true, 00:25:50.218 "nvme_iov_md": false 00:25:50.218 }, 00:25:50.218 "driver_specific": { 00:25:50.218 "nvme": [ 00:25:50.218 { 00:25:50.218 "pci_address": "0000:00:11.0", 00:25:50.218 "trid": { 00:25:50.218 "trtype": "PCIe", 00:25:50.218 "traddr": "0000:00:11.0" 00:25:50.218 }, 00:25:50.218 "ctrlr_data": { 00:25:50.218 "cntlid": 0, 00:25:50.218 "vendor_id": "0x1b36", 00:25:50.218 "model_number": "QEMU NVMe Ctrl", 00:25:50.218 "serial_number": "12341", 00:25:50.218 "firmware_revision": "8.0.0", 00:25:50.218 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:50.218 "oacs": { 00:25:50.218 "security": 0, 00:25:50.218 "format": 1, 00:25:50.218 "firmware": 0, 00:25:50.218 "ns_manage": 1 00:25:50.218 }, 00:25:50.218 "multi_ctrlr": false, 00:25:50.218 "ana_reporting": false 00:25:50.218 }, 00:25:50.218 "vs": { 00:25:50.218 "nvme_version": "1.4" 00:25:50.218 }, 00:25:50.218 "ns_data": { 00:25:50.218 "id": 1, 00:25:50.218 "can_share": false 00:25:50.218 } 00:25:50.218 } 00:25:50.218 ], 00:25:50.218 "mp_policy": "active_passive" 00:25:50.218 } 00:25:50.218 } 00:25:50.218 ]' 00:25:50.218 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:50.218 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:50.218 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=e20adc4a-8534-4c36-af50-0f3661835787 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:50.477 10:31:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e20adc4a-8534-4c36-af50-0f3661835787 00:25:50.735 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:50.994 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=1ba34245-3904-43ad-aa21-908d44dd6124 00:25:50.994 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1ba34245-3904-43ad-aa21-908d44dd6124 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:51.253 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:51.512 { 00:25:51.512 "name": "059f07ff-43c2-466d-a59e-b147fea4006b", 00:25:51.512 "aliases": [ 00:25:51.512 "lvs/nvme0n1p0" 00:25:51.512 ], 00:25:51.512 "product_name": "Logical Volume", 00:25:51.512 "block_size": 4096, 00:25:51.512 "num_blocks": 26476544, 00:25:51.512 "uuid": "059f07ff-43c2-466d-a59e-b147fea4006b", 00:25:51.512 "assigned_rate_limits": { 00:25:51.512 "rw_ios_per_sec": 0, 00:25:51.512 "rw_mbytes_per_sec": 0, 00:25:51.512 "r_mbytes_per_sec": 0, 00:25:51.512 "w_mbytes_per_sec": 0 00:25:51.512 }, 00:25:51.512 "claimed": false, 00:25:51.512 "zoned": false, 00:25:51.512 "supported_io_types": { 00:25:51.512 "read": true, 00:25:51.512 "write": true, 00:25:51.512 "unmap": true, 00:25:51.512 "flush": false, 00:25:51.512 "reset": true, 00:25:51.512 "nvme_admin": false, 00:25:51.512 "nvme_io": false, 00:25:51.512 "nvme_io_md": false, 00:25:51.512 "write_zeroes": true, 00:25:51.512 "zcopy": false, 00:25:51.512 "get_zone_info": false, 00:25:51.512 "zone_management": false, 00:25:51.512 "zone_append": false, 00:25:51.512 "compare": false, 00:25:51.512 "compare_and_write": false, 00:25:51.512 "abort": false, 00:25:51.512 "seek_hole": true, 00:25:51.512 "seek_data": true, 00:25:51.512 "copy": false, 00:25:51.512 "nvme_iov_md": false 00:25:51.512 }, 00:25:51.512 "driver_specific": { 00:25:51.512 "lvol": { 00:25:51.512 "lvol_store_uuid": "1ba34245-3904-43ad-aa21-908d44dd6124", 00:25:51.512 "base_bdev": "nvme0n1", 00:25:51.512 "thin_provision": true, 00:25:51.512 "num_allocated_clusters": 0, 00:25:51.512 "snapshot": false, 00:25:51.512 "clone": false, 00:25:51.512 "esnap_clone": false 00:25:51.512 } 00:25:51.512 } 00:25:51.512 } 00:25:51.512 ]' 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:51.512 10:31:30 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=059f07ff-43c2-466d-a59e-b147fea4006b 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:51.771 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:52.029 { 00:25:52.029 "name": "059f07ff-43c2-466d-a59e-b147fea4006b", 00:25:52.029 "aliases": [ 00:25:52.029 "lvs/nvme0n1p0" 00:25:52.029 ], 00:25:52.029 "product_name": "Logical Volume", 00:25:52.029 "block_size": 4096, 00:25:52.029 "num_blocks": 26476544, 00:25:52.029 "uuid": "059f07ff-43c2-466d-a59e-b147fea4006b", 00:25:52.029 "assigned_rate_limits": { 00:25:52.029 "rw_ios_per_sec": 0, 00:25:52.029 "rw_mbytes_per_sec": 0, 00:25:52.029 "r_mbytes_per_sec": 0, 00:25:52.029 "w_mbytes_per_sec": 0 00:25:52.029 }, 00:25:52.029 "claimed": false, 00:25:52.029 "zoned": false, 00:25:52.029 "supported_io_types": { 00:25:52.029 "read": true, 00:25:52.029 "write": true, 00:25:52.029 "unmap": true, 00:25:52.029 "flush": false, 00:25:52.029 "reset": true, 00:25:52.029 "nvme_admin": false, 00:25:52.029 "nvme_io": false, 00:25:52.029 "nvme_io_md": false, 00:25:52.029 "write_zeroes": true, 00:25:52.029 "zcopy": false, 00:25:52.029 "get_zone_info": false, 00:25:52.029 "zone_management": false, 00:25:52.029 "zone_append": false, 00:25:52.029 "compare": false, 00:25:52.029 "compare_and_write": false, 00:25:52.029 "abort": false, 00:25:52.029 "seek_hole": true, 00:25:52.029 "seek_data": true, 00:25:52.029 "copy": false, 00:25:52.029 "nvme_iov_md": false 00:25:52.029 }, 00:25:52.029 "driver_specific": { 00:25:52.029 "lvol": { 00:25:52.029 "lvol_store_uuid": "1ba34245-3904-43ad-aa21-908d44dd6124", 00:25:52.029 "base_bdev": "nvme0n1", 00:25:52.029 "thin_provision": true, 00:25:52.029 "num_allocated_clusters": 0, 00:25:52.029 "snapshot": false, 00:25:52.029 "clone": false, 00:25:52.029 "esnap_clone": false 00:25:52.029 } 00:25:52.029 } 00:25:52.029 } 00:25:52.029 ]' 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:52.029 10:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=059f07ff-43c2-466d-a59e-b147fea4006b 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 059f07ff-43c2-466d-a59e-b147fea4006b 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:52.288 { 00:25:52.288 "name": "059f07ff-43c2-466d-a59e-b147fea4006b", 00:25:52.288 "aliases": [ 00:25:52.288 "lvs/nvme0n1p0" 00:25:52.288 ], 00:25:52.288 "product_name": "Logical Volume", 00:25:52.288 "block_size": 4096, 00:25:52.288 "num_blocks": 26476544, 00:25:52.288 "uuid": "059f07ff-43c2-466d-a59e-b147fea4006b", 00:25:52.288 "assigned_rate_limits": { 00:25:52.288 "rw_ios_per_sec": 0, 00:25:52.288 "rw_mbytes_per_sec": 0, 00:25:52.288 "r_mbytes_per_sec": 0, 00:25:52.288 "w_mbytes_per_sec": 0 00:25:52.288 }, 00:25:52.288 "claimed": false, 00:25:52.288 "zoned": false, 00:25:52.288 "supported_io_types": { 00:25:52.288 "read": true, 00:25:52.288 "write": true, 00:25:52.288 "unmap": true, 00:25:52.288 "flush": false, 00:25:52.288 "reset": true, 00:25:52.288 "nvme_admin": false, 00:25:52.288 "nvme_io": false, 00:25:52.288 "nvme_io_md": false, 00:25:52.288 "write_zeroes": true, 00:25:52.288 "zcopy": false, 00:25:52.288 "get_zone_info": false, 00:25:52.288 "zone_management": false, 00:25:52.288 "zone_append": false, 00:25:52.288 "compare": false, 00:25:52.288 "compare_and_write": false, 00:25:52.288 "abort": false, 00:25:52.288 "seek_hole": true, 00:25:52.288 "seek_data": true, 00:25:52.288 "copy": false, 00:25:52.288 "nvme_iov_md": false 00:25:52.288 }, 00:25:52.288 "driver_specific": { 00:25:52.288 "lvol": { 00:25:52.288 "lvol_store_uuid": "1ba34245-3904-43ad-aa21-908d44dd6124", 00:25:52.288 "base_bdev": "nvme0n1", 00:25:52.288 "thin_provision": true, 00:25:52.288 "num_allocated_clusters": 0, 00:25:52.288 "snapshot": false, 00:25:52.288 "clone": false, 00:25:52.288 "esnap_clone": false 00:25:52.288 } 00:25:52.288 } 00:25:52.288 } 00:25:52.288 ]' 00:25:52.288 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 059f07ff-43c2-466d-a59e-b147fea4006b 
--l2p_dram_limit 10' 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:52.548 10:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 059f07ff-43c2-466d-a59e-b147fea4006b --l2p_dram_limit 10 -c nvc0n1p0 00:25:52.548 [2024-11-29 10:31:31.983995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.984062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:52.548 [2024-11-29 10:31:31.984075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:52.548 [2024-11-29 10:31:31.984084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.984138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.984150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.548 [2024-11-29 10:31:31.984156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:52.548 [2024-11-29 10:31:31.984166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.984182] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:52.548 [2024-11-29 10:31:31.984439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:52.548 [2024-11-29 10:31:31.984458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.984467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.548 [2024-11-29 10:31:31.984478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:25:52.548 [2024-11-29 10:31:31.984488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.984591] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 99d4e7ab-8983-46a9-bd57-28b2bf7a4dea 00:25:52.548 [2024-11-29 10:31:31.985894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.985927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:52.548 [2024-11-29 10:31:31.985937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:52.548 [2024-11-29 10:31:31.985945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.992757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.992784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.548 [2024-11-29 10:31:31.992797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.766 ms 00:25:52.548 [2024-11-29 10:31:31.992815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.992885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.992894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.548 [2024-11-29 10:31:31.992902] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:52.548 [2024-11-29 10:31:31.992908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.992963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.992974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:52.548 [2024-11-29 10:31:31.992983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:52.548 [2024-11-29 10:31:31.992992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.993013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:52.548 [2024-11-29 10:31:31.994645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.994674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.548 [2024-11-29 10:31:31.994682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:25:52.548 [2024-11-29 10:31:31.994690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.994720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.548 [2024-11-29 10:31:31.994730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:52.548 [2024-11-29 10:31:31.994737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:52.548 [2024-11-29 10:31:31.994747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.548 [2024-11-29 10:31:31.994761] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:52.548 [2024-11-29 10:31:31.994889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:52.548 [2024-11-29 10:31:31.994905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:52.549 [2024-11-29 10:31:31.994917] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:52.549 [2024-11-29 10:31:31.994929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:52.549 [2024-11-29 10:31:31.994942] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:52.549 [2024-11-29 10:31:31.994949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:52.549 [2024-11-29 10:31:31.994960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:52.549 [2024-11-29 10:31:31.994966] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:52.549 [2024-11-29 10:31:31.994974] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:52.549 [2024-11-29 10:31:31.994980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.549 [2024-11-29 10:31:31.994988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:52.549 [2024-11-29 10:31:31.994995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:25:52.549 [2024-11-29 10:31:31.995002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.549 [2024-11-29 10:31:31.995067] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.549 [2024-11-29 10:31:31.995085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:52.549 [2024-11-29 10:31:31.995092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:52.549 [2024-11-29 10:31:31.995102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.549 [2024-11-29 10:31:31.995178] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:52.549 [2024-11-29 10:31:31.995190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:52.549 [2024-11-29 10:31:31.995197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:52.549 [2024-11-29 10:31:31.995218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:52.549 [2024-11-29 10:31:31.995237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.549 [2024-11-29 10:31:31.995249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:52.549 [2024-11-29 10:31:31.995258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:52.549 [2024-11-29 10:31:31.995263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.549 [2024-11-29 10:31:31.995271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:52.549 [2024-11-29 10:31:31.995277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:52.549 [2024-11-29 10:31:31.995283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:52.549 [2024-11-29 10:31:31.995297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:52.549 [2024-11-29 10:31:31.995317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:52.549 [2024-11-29 10:31:31.995338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:52.549 [2024-11-29 10:31:31.995357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995370] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:52.549 [2024-11-29 10:31:31.995379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:52.549 [2024-11-29 10:31:31.995399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.549 [2024-11-29 10:31:31.995414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:52.549 [2024-11-29 10:31:31.995421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:52.549 [2024-11-29 10:31:31.995427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.549 [2024-11-29 10:31:31.995436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:52.549 [2024-11-29 10:31:31.995442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:52.549 [2024-11-29 10:31:31.995450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:52.549 [2024-11-29 10:31:31.995461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:52.549 [2024-11-29 10:31:31.995466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995473] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:52.549 [2024-11-29 10:31:31.995484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:52.549 [2024-11-29 10:31:31.995492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.549 [2024-11-29 10:31:31.995507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:52.549 [2024-11-29 10:31:31.995512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:52.549 [2024-11-29 10:31:31.995518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:52.549 [2024-11-29 10:31:31.995523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:52.549 [2024-11-29 10:31:31.995529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:52.549 [2024-11-29 10:31:31.995534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:52.549 [2024-11-29 10:31:31.995543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:52.549 [2024-11-29 10:31:31.995551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:52.549 [2024-11-29 10:31:31.995565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:52.549 [2024-11-29 10:31:31.995571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:52.549 [2024-11-29 10:31:31.995577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:52.549 [2024-11-29 10:31:31.995584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:52.549 [2024-11-29 10:31:31.995590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:52.549 [2024-11-29 10:31:31.995598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:52.549 [2024-11-29 10:31:31.995605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:52.549 [2024-11-29 10:31:31.995612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:52.549 [2024-11-29 10:31:31.995619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:52.549 [2024-11-29 10:31:31.995652] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:52.549 [2024-11-29 10:31:31.995659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:52.549 [2024-11-29 10:31:31.995671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:52.549 [2024-11-29 10:31:31.995678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:52.549 [2024-11-29 10:31:31.995684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:52.549 [2024-11-29 10:31:31.995690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.549 [2024-11-29 10:31:31.995696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:52.549 [2024-11-29 10:31:31.995710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:52.549 [2024-11-29 10:31:31.995716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.549 [2024-11-29 10:31:31.995750] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:52.549 [2024-11-29 10:31:31.995763] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:55.081 [2024-11-29 10:31:34.299399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.299504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:55.081 [2024-11-29 10:31:34.299536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2303.626 ms 00:25:55.081 [2024-11-29 10:31:34.299554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.312872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.312917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.081 [2024-11-29 10:31:34.312939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.170 ms 00:25:55.081 [2024-11-29 10:31:34.312949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.313048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.313057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:55.081 [2024-11-29 10:31:34.313090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:55.081 [2024-11-29 10:31:34.313098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.323992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.324032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.081 [2024-11-29 10:31:34.324046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.832 ms 00:25:55.081 [2024-11-29 10:31:34.324057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.324089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.324098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.081 [2024-11-29 10:31:34.324108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:55.081 [2024-11-29 10:31:34.324116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.324557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.324582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.081 [2024-11-29 10:31:34.324593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:25:55.081 [2024-11-29 10:31:34.324601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.324726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.324743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.081 [2024-11-29 10:31:34.324755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:55.081 [2024-11-29 10:31:34.324764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.331847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.331879] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.081 [2024-11-29 10:31:34.331891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.060 ms 00:25:55.081 [2024-11-29 10:31:34.331900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.352713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:55.081 [2024-11-29 10:31:34.356909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.356958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:55.081 [2024-11-29 10:31:34.356977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.943 ms 00:25:55.081 [2024-11-29 10:31:34.356994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.407861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.407908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:55.081 [2024-11-29 10:31:34.407920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.816 ms 00:25:55.081 [2024-11-29 10:31:34.407933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.408126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.408140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:55.081 [2024-11-29 10:31:34.408153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:25:55.081 [2024-11-29 10:31:34.408167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.411394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.411432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:55.081 [2024-11-29 10:31:34.411445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.196 ms 00:25:55.081 [2024-11-29 10:31:34.411456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.414004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.414040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:55.081 [2024-11-29 10:31:34.414051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:25:55.081 [2024-11-29 10:31:34.414062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.414367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.414392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:55.081 [2024-11-29 10:31:34.414402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:55.081 [2024-11-29 10:31:34.414414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.444237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.444277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:55.081 [2024-11-29 10:31:34.444291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.804 ms 00:25:55.081 [2024-11-29 10:31:34.444306] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.448521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.448559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:55.081 [2024-11-29 10:31:34.448570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.183 ms 00:25:55.081 [2024-11-29 10:31:34.448582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.451814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.451849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:55.081 [2024-11-29 10:31:34.451858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:25:55.081 [2024-11-29 10:31:34.451867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.455288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.455328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:55.081 [2024-11-29 10:31:34.455339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:25:55.081 [2024-11-29 10:31:34.455353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.455380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.081 [2024-11-29 10:31:34.455392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:55.081 [2024-11-29 10:31:34.455403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:55.081 [2024-11-29 10:31:34.455413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.081 [2024-11-29 10:31:34.455494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.082 [2024-11-29 10:31:34.455507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:55.082 [2024-11-29 10:31:34.455515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:55.082 [2024-11-29 10:31:34.455527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.082 [2024-11-29 10:31:34.456531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2472.089 ms, result 0 00:25:55.082 { 00:25:55.082 "name": "ftl0", 00:25:55.082 "uuid": "99d4e7ab-8983-46a9-bd57-28b2bf7a4dea" 00:25:55.082 } 00:25:55.082 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:55.082 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:55.339 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:55.339 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:55.339 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:55.598 /dev/nbd0 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:55.598 1+0 records in 00:25:55.598 1+0 records out 00:25:55.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261841 s, 15.6 MB/s 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:55.598 10:31:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:55.598 [2024-11-29 10:31:34.982280] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:25:55.598 [2024-11-29 10:31:34.982383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91638 ] 00:25:55.857 [2024-11-29 10:31:35.132741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.857 [2024-11-29 10:31:35.150996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.815  [2024-11-29T10:31:37.215Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-29T10:31:38.591Z] Copying: 393/1024 [MB] (197 MBps) [2024-11-29T10:31:39.525Z] Copying: 590/1024 [MB] (197 MBps) [2024-11-29T10:31:40.091Z] Copying: 836/1024 [MB] (246 MBps) [2024-11-29T10:31:40.091Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:26:00.626 00:26:00.626 10:31:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:02.529 10:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:02.788 [2024-11-29 10:31:42.010132] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
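The waitfornbd helper traced above polls /proc/partitions until the nbd device registers, then issues one direct-I/O read to confirm the device actually serves blocks. A standalone sketch of that polling pattern follows; the sleep back-off is an assumption (the trace only shows the loop counter, grep, break, and dd), and the 20-iteration cap mirrors the traced loop:

  #!/usr/bin/env bash
  # Wait for an nbd device to appear, then prove it serves I/O with one
  # direct 4 KiB read -- the same flow the waitfornbd trace above shows.
  nbd_name=${1:-nbd0}                        # the test above waits on nbd0
  for ((i = 1; i <= 20; i++)); do            # retry cap mirrors the traced loop
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                              # assumed back-off; not in the trace
  done
  grep -q -w "$nbd_name" /proc/partitions || exit 1
  # One direct-I/O block read proves the device is actually serving I/O.
  dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct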
00:26:02.788 [2024-11-29 10:31:42.010277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91716 ] 00:26:02.788 [2024-11-29 10:31:42.150735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.788 [2024-11-29 10:31:42.168460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.162  [2024-11-29T10:31:44.560Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-29T10:31:45.494Z] Copying: 57/1024 [MB] (29 MBps) [2024-11-29T10:31:46.427Z] Copying: 88/1024 [MB] (31 MBps) [2024-11-29T10:31:47.361Z] Copying: 119/1024 [MB] (30 MBps) [2024-11-29T10:31:48.296Z] Copying: 146/1024 [MB] (27 MBps) [2024-11-29T10:31:49.231Z] Copying: 175/1024 [MB] (28 MBps) [2024-11-29T10:31:50.604Z] Copying: 203/1024 [MB] (28 MBps) [2024-11-29T10:31:51.539Z] Copying: 234/1024 [MB] (31 MBps) [2024-11-29T10:31:52.486Z] Copying: 263/1024 [MB] (29 MBps) [2024-11-29T10:31:53.418Z] Copying: 296/1024 [MB] (32 MBps) [2024-11-29T10:31:54.348Z] Copying: 331/1024 [MB] (35 MBps) [2024-11-29T10:31:55.285Z] Copying: 362/1024 [MB] (31 MBps) [2024-11-29T10:31:56.216Z] Copying: 392/1024 [MB] (29 MBps) [2024-11-29T10:31:57.588Z] Copying: 421/1024 [MB] (29 MBps) [2024-11-29T10:31:58.521Z] Copying: 454/1024 [MB] (33 MBps) [2024-11-29T10:31:59.456Z] Copying: 484/1024 [MB] (30 MBps) [2024-11-29T10:32:00.390Z] Copying: 516/1024 [MB] (31 MBps) [2024-11-29T10:32:01.324Z] Copying: 550/1024 [MB] (33 MBps) [2024-11-29T10:32:02.257Z] Copying: 579/1024 [MB] (29 MBps) [2024-11-29T10:32:03.632Z] Copying: 612/1024 [MB] (33 MBps) [2024-11-29T10:32:04.218Z] Copying: 642/1024 [MB] (30 MBps) [2024-11-29T10:32:05.594Z] Copying: 674/1024 [MB] (31 MBps) [2024-11-29T10:32:06.529Z] Copying: 705/1024 [MB] (31 MBps) [2024-11-29T10:32:07.465Z] Copying: 740/1024 [MB] (35 MBps) [2024-11-29T10:32:08.400Z] Copying: 773/1024 [MB] (32 MBps) [2024-11-29T10:32:09.333Z] Copying: 804/1024 [MB] (30 MBps) [2024-11-29T10:32:10.267Z] Copying: 834/1024 [MB] (29 MBps) [2024-11-29T10:32:11.640Z] Copying: 865/1024 [MB] (31 MBps) [2024-11-29T10:32:12.574Z] Copying: 896/1024 [MB] (31 MBps) [2024-11-29T10:32:13.513Z] Copying: 926/1024 [MB] (29 MBps) [2024-11-29T10:32:14.520Z] Copying: 957/1024 [MB] (31 MBps) [2024-11-29T10:32:15.516Z] Copying: 986/1024 [MB] (28 MBps) [2024-11-29T10:32:15.774Z] Copying: 1014/1024 [MB] (28 MBps) [2024-11-29T10:32:15.774Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:26:36.309 00:26:36.309 10:32:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:36.309 10:32:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:36.568 10:32:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:36.829 [2024-11-29 10:32:16.116338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.116414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:36.829 [2024-11-29 10:32:16.116432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:36.829 [2024-11-29 10:32:16.116442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.116471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:26:36.829 [2024-11-29 10:32:16.117275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.117331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:36.829 [2024-11-29 10:32:16.117348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:26:36.829 [2024-11-29 10:32:16.117360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.120236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.120287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:36.829 [2024-11-29 10:32:16.120299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:26:36.829 [2024-11-29 10:32:16.120310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.138195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.138256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:36.829 [2024-11-29 10:32:16.138272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:26:36.829 [2024-11-29 10:32:16.138288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.144484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.144529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:36.829 [2024-11-29 10:32:16.144542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:26:36.829 [2024-11-29 10:32:16.144554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.147262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.147326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:36.829 [2024-11-29 10:32:16.147336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:26:36.829 [2024-11-29 10:32:16.147347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.153773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.153845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:36.829 [2024-11-29 10:32:16.153857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.382 ms 00:26:36.829 [2024-11-29 10:32:16.153867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.154010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.154024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:36.829 [2024-11-29 10:32:16.154033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:36.829 [2024-11-29 10:32:16.154052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.157289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.157341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:36.829 [2024-11-29 10:32:16.157352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:26:36.829 
[2024-11-29 10:32:16.157362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.160446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.160501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:36.829 [2024-11-29 10:32:16.160510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:26:36.829 [2024-11-29 10:32:16.160520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.162672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.162727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:36.829 [2024-11-29 10:32:16.162736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:26:36.829 [2024-11-29 10:32:16.162746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.165095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.829 [2024-11-29 10:32:16.165147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:36.829 [2024-11-29 10:32:16.165157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:26:36.829 [2024-11-29 10:32:16.165167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.829 [2024-11-29 10:32:16.165208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:36.829 [2024-11-29 10:32:16.165229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:26:36.829 [2024-11-29 10:32:16.165360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:36.829 [2024-11-29 10:32:16.165624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.165993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166048] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:36.830 [2024-11-29 10:32:16.166171] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:36.830 [2024-11-29 10:32:16.166179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99d4e7ab-8983-46a9-bd57-28b2bf7a4dea 00:26:36.830 [2024-11-29 10:32:16.166190] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:36.830 [2024-11-29 10:32:16.166203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:36.830 [2024-11-29 10:32:16.166213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:36.830 [2024-11-29 10:32:16.166222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:36.830 [2024-11-29 10:32:16.166231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:36.830 [2024-11-29 10:32:16.166240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:36.830 [2024-11-29 10:32:16.166249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:36.830 [2024-11-29 10:32:16.166256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:36.830 [2024-11-29 10:32:16.166271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:36.830 [2024-11-29 10:32:16.166278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.830 [2024-11-29 10:32:16.166289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:36.830 [2024-11-29 10:32:16.166301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:26:36.830 [2024-11-29 10:32:16.166310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.168833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.830 [2024-11-29 10:32:16.169015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:26:36.830 [2024-11-29 10:32:16.169034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:26:36.830 [2024-11-29 10:32:16.169045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.169167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.830 [2024-11-29 10:32:16.169182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:36.830 [2024-11-29 10:32:16.169191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:36.830 [2024-11-29 10:32:16.169207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.177195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.177249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:36.830 [2024-11-29 10:32:16.177259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.177269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.177340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.177354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.830 [2024-11-29 10:32:16.177366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.177375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.177434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.177450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.830 [2024-11-29 10:32:16.177457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.177467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.177485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.177495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.830 [2024-11-29 10:32:16.177505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.177515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.192038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.192276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.830 [2024-11-29 10:32:16.192297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.192308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.203897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.203962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.830 [2024-11-29 10:32:16.203974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.203985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 
10:32:16.204085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:36.830 [2024-11-29 10:32:16.204094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.204104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.204211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:36.830 [2024-11-29 10:32:16.204219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.204232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.204324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:36.830 [2024-11-29 10:32:16.204333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.204343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.204388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:36.830 [2024-11-29 10:32:16.204395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.204406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.204465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:36.830 [2024-11-29 10:32:16.204474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.204484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.830 [2024-11-29 10:32:16.204553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:36.830 [2024-11-29 10:32:16.204563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.830 [2024-11-29 10:32:16.204577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.830 [2024-11-29 10:32:16.204730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.350 ms, result 0 00:26:36.830 true 00:26:36.830 10:32:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91513 00:26:36.830 10:32:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91513 00:26:36.830 10:32:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:37.093 [2024-11-29 10:32:16.300765] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
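The dirty shutdown itself is the "kill -9 91513" recorded above: SIGKILL gives the FTL instance no chance to persist a clean state, which is what forces the recovery path on the next startup. A minimal sketch of that step follows; the backgrounding and PID capture are assumptions for illustration -- the log only shows the kill and the trace-file cleanup:

  # Start the target, run I/O, then simulate power loss with SIGKILL so the
  # next startup sees a dirty FTL state. SPDK_BIN_DIR assumed to be exported.
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &
  tgt_pid=$!
  # ... drive writes through the FTL bdev here ...
  kill -9 "$tgt_pid"                                 # simulated power loss
  rm -f "/dev/shm/spdk_tgt_trace.pid${tgt_pid}"      # matches the cleanup above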
00:26:37.093 [2024-11-29 10:32:16.300931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92076 ] 00:26:37.093 [2024-11-29 10:32:16.448347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.093 [2024-11-29 10:32:16.480925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.473  [2024-11-29T10:32:18.874Z] Copying: 204/1024 [MB] (204 MBps) [2024-11-29T10:32:19.808Z] Copying: 465/1024 [MB] (261 MBps) [2024-11-29T10:32:20.744Z] Copying: 725/1024 [MB] (260 MBps) [2024-11-29T10:32:20.744Z] Copying: 983/1024 [MB] (257 MBps) [2024-11-29T10:32:21.004Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:26:41.539 00:26:41.539 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91513 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:41.539 10:32:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:41.539 [2024-11-29 10:32:20.932738] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:26:41.539 [2024-11-29 10:32:20.933045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92124 ] 00:26:41.797 [2024-11-29 10:32:21.074084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.797 [2024-11-29 10:32:21.094717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.797 [2024-11-29 10:32:21.178113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.797 [2024-11-29 10:32:21.178169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.797 [2024-11-29 10:32:21.240014] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:41.797 [2024-11-29 10:32:21.240343] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:41.797 [2024-11-29 10:32:21.240611] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:42.056 [2024-11-29 10:32:21.435940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.056 [2024-11-29 10:32:21.435983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:42.056 [2024-11-29 10:32:21.435993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.056 [2024-11-29 10:32:21.436003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.056 [2024-11-29 10:32:21.436042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.056 [2024-11-29 10:32:21.436049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.056 [2024-11-29 10:32:21.436058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:42.056 [2024-11-29 10:32:21.436066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.056 [2024-11-29 10:32:21.436080] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:42.056 
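With the target dead, the follow-up writes run through spdk_dd against the JSON config captured earlier: steps 64-66 of dirty_shutdown.sh above wrap the save_subsystem_config output in a subsystems array, and step 88 replays it via --json so no running RPC server is needed. A sketch of that capture-and-replay, with the config path and SPDK_DIR as placeholders rather than paths from the test:

  # Capture the bdev subsystem config while the target is alive, then replay
  # it via spdk_dd once the target is gone -- mirrors steps 64-66 and 88 above.
  CONFIG=/tmp/ftl.json                       # placeholder path
  {
      echo '{"subsystems": ['
      "$SPDK_DIR/scripts/rpc.py" save_subsystem_config -n bdev
      echo ']}'
  } > "$CONFIG"
  "$SPDK_DIR/build/bin/spdk_dd" --if=testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 --json="$CONFIG"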
[2024-11-29 10:32:21.436259] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:42.056 [2024-11-29 10:32:21.436275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.056 [2024-11-29 10:32:21.436283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.056 [2024-11-29 10:32:21.436291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:26:42.056 [2024-11-29 10:32:21.436297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.056 [2024-11-29 10:32:21.437179] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:42.056 [2024-11-29 10:32:21.439042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.056 [2024-11-29 10:32:21.439070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:42.056 [2024-11-29 10:32:21.439078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:26:42.056 [2024-11-29 10:32:21.439084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.439123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.439130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:42.057 [2024-11-29 10:32:21.439138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:42.057 [2024-11-29 10:32:21.439144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.443271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.443297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.057 [2024-11-29 10:32:21.443305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:26:42.057 [2024-11-29 10:32:21.443311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.443377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.443384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.057 [2024-11-29 10:32:21.443391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:42.057 [2024-11-29 10:32:21.443398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.443445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.443460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:42.057 [2024-11-29 10:32:21.443469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.057 [2024-11-29 10:32:21.443477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.443493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.057 [2024-11-29 10:32:21.444615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.444643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.057 [2024-11-29 10:32:21.444650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:26:42.057 [2024-11-29 10:32:21.444657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:42.057 [2024-11-29 10:32:21.444679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.444686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:42.057 [2024-11-29 10:32:21.444692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:42.057 [2024-11-29 10:32:21.444698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.444716] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:42.057 [2024-11-29 10:32:21.444730] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:42.057 [2024-11-29 10:32:21.444760] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:42.057 [2024-11-29 10:32:21.444773] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:42.057 [2024-11-29 10:32:21.444866] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:42.057 [2024-11-29 10:32:21.444876] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:42.057 [2024-11-29 10:32:21.444884] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:42.057 [2024-11-29 10:32:21.444892] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:42.057 [2024-11-29 10:32:21.444902] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:42.057 [2024-11-29 10:32:21.444909] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:42.057 [2024-11-29 10:32:21.444914] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:42.057 [2024-11-29 10:32:21.444922] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:42.057 [2024-11-29 10:32:21.444930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:42.057 [2024-11-29 10:32:21.444936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.444943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:42.057 [2024-11-29 10:32:21.444948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:26:42.057 [2024-11-29 10:32:21.444956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.445021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.057 [2024-11-29 10:32:21.445028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:42.057 [2024-11-29 10:32:21.445034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:42.057 [2024-11-29 10:32:21.445042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.057 [2024-11-29 10:32:21.445120] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:42.057 [2024-11-29 10:32:21.445129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:42.057 [2024-11-29 10:32:21.445135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445141] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:42.057 [2024-11-29 10:32:21.445152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:42.057 [2024-11-29 10:32:21.445167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.057 [2024-11-29 10:32:21.445177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:42.057 [2024-11-29 10:32:21.445184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:42.057 [2024-11-29 10:32:21.445192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.057 [2024-11-29 10:32:21.445198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:42.057 [2024-11-29 10:32:21.445203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:42.057 [2024-11-29 10:32:21.445208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:42.057 [2024-11-29 10:32:21.445218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:42.057 [2024-11-29 10:32:21.445233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:42.057 [2024-11-29 10:32:21.445248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:42.057 [2024-11-29 10:32:21.445263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:42.057 [2024-11-29 10:32:21.445283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:42.057 [2024-11-29 10:32:21.445300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.057 [2024-11-29 10:32:21.445311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:42.057 
[2024-11-29 10:32:21.445317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:42.057 [2024-11-29 10:32:21.445323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.057 [2024-11-29 10:32:21.445329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:42.057 [2024-11-29 10:32:21.445334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:42.057 [2024-11-29 10:32:21.445340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:42.057 [2024-11-29 10:32:21.445351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:42.057 [2024-11-29 10:32:21.445357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445363] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:42.057 [2024-11-29 10:32:21.445373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:42.057 [2024-11-29 10:32:21.445380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.057 [2024-11-29 10:32:21.445392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:42.057 [2024-11-29 10:32:21.445399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:42.057 [2024-11-29 10:32:21.445405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:42.057 [2024-11-29 10:32:21.445411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:42.057 [2024-11-29 10:32:21.445416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:42.057 [2024-11-29 10:32:21.445422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:42.057 [2024-11-29 10:32:21.445428] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:42.057 [2024-11-29 10:32:21.445436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.057 [2024-11-29 10:32:21.445443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:42.057 [2024-11-29 10:32:21.445449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:42.057 [2024-11-29 10:32:21.445455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:42.058 [2024-11-29 10:32:21.445461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:42.058 [2024-11-29 10:32:21.445468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:42.058 [2024-11-29 10:32:21.445479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:42.058 [2024-11-29 10:32:21.445486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:42.058 [2024-11-29 10:32:21.445492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:42.058 [2024-11-29 10:32:21.445498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:42.058 [2024-11-29 10:32:21.445504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:42.058 [2024-11-29 10:32:21.445510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:42.058 [2024-11-29 10:32:21.445516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:42.058 [2024-11-29 10:32:21.445522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:42.058 [2024-11-29 10:32:21.445529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:42.058 [2024-11-29 10:32:21.445534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:42.058 [2024-11-29 10:32:21.445543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.058 [2024-11-29 10:32:21.445553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.058 [2024-11-29 10:32:21.445559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:42.058 [2024-11-29 10:32:21.445565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:42.058 [2024-11-29 10:32:21.445571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:42.058 [2024-11-29 10:32:21.445578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.445586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:42.058 [2024-11-29 10:32:21.445592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:26:42.058 [2024-11-29 10:32:21.445603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.453722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.453754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.058 [2024-11-29 10:32:21.453765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.081 ms 00:26:42.058 [2024-11-29 10:32:21.453772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.453846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.453855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.058 [2024-11-29 10:32:21.453864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:42.058 [2024-11-29 
10:32:21.453870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.479636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.479681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.058 [2024-11-29 10:32:21.479694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.725 ms 00:26:42.058 [2024-11-29 10:32:21.479702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.479753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.479764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.058 [2024-11-29 10:32:21.479772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:42.058 [2024-11-29 10:32:21.479780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.480140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.480170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.058 [2024-11-29 10:32:21.480179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:26:42.058 [2024-11-29 10:32:21.480187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.480313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.480335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.058 [2024-11-29 10:32:21.480345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:26:42.058 [2024-11-29 10:32:21.480354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.485274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.485303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.058 [2024-11-29 10:32:21.485312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.901 ms 00:26:42.058 [2024-11-29 10:32:21.485326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.487371] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:42.058 [2024-11-29 10:32:21.487400] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:42.058 [2024-11-29 10:32:21.487413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.487421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:42.058 [2024-11-29 10:32:21.487429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:26:42.058 [2024-11-29 10:32:21.487436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.499231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.499266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:42.058 [2024-11-29 10:32:21.499284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.695 ms 00:26:42.058 [2024-11-29 10:32:21.499291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:26:42.058 [2024-11-29 10:32:21.501004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.501026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:42.058 [2024-11-29 10:32:21.501033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:26:42.058 [2024-11-29 10:32:21.501039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.502089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.502110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:42.058 [2024-11-29 10:32:21.502117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:26:42.058 [2024-11-29 10:32:21.502122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.502385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.502400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.058 [2024-11-29 10:32:21.502407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:26:42.058 [2024-11-29 10:32:21.502412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.058 [2024-11-29 10:32:21.515596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.058 [2024-11-29 10:32:21.515631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:42.058 [2024-11-29 10:32:21.515640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.170 ms 00:26:42.058 [2024-11-29 10:32:21.515646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.317 [2024-11-29 10:32:21.521355] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:42.317 [2024-11-29 10:32:21.523170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.317 [2024-11-29 10:32:21.523192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:42.317 [2024-11-29 10:32:21.523207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.492 ms 00:26:42.317 [2024-11-29 10:32:21.523214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.317 [2024-11-29 10:32:21.523257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.317 [2024-11-29 10:32:21.523272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:42.317 [2024-11-29 10:32:21.523282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:42.317 [2024-11-29 10:32:21.523288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.317 [2024-11-29 10:32:21.523341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.317 [2024-11-29 10:32:21.523349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.317 [2024-11-29 10:32:21.523357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:42.317 [2024-11-29 10:32:21.523366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.317 [2024-11-29 10:32:21.523382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.317 [2024-11-29 10:32:21.523390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:42.317 
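
[Editor's note] The trace above is SPDK's FTL management pipeline ("FTL startup") bringing the ftl0 device up step by step: NV cache, valid map, trim map, bands metadata, reloc, then L2P restore; each Action/name/duration/status quartet is one step, and status 0 means it succeeded. For orientation, a minimal sketch of how such a device is typically created and inspected over JSON-RPC, assuming a running SPDK target; the base and cache bdev names below are illustrative placeholders, not taken from this run:

    # Create an FTL bdev "ftl0" on top of a base bdev and an NV-cache bdev
    # ("nvme0n1" / "nvc0n1p0" are assumed names), then inspect the result.
    ./scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0
    ./scripts/rpc.py bdev_get_bdevs -b ftl0
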
[2024-11-29 10:32:21.523397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.317 [2024-11-29 10:32:21.523405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.318 [2024-11-29 10:32:21.523430] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:42.318 [2024-11-29 10:32:21.523439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.318 [2024-11-29 10:32:21.523445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:42.318 [2024-11-29 10:32:21.523454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:42.318 [2024-11-29 10:32:21.523460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.318 [2024-11-29 10:32:21.526189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.318 [2024-11-29 10:32:21.526216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:42.318 [2024-11-29 10:32:21.526226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:26:42.318 [2024-11-29 10:32:21.526233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.318 [2024-11-29 10:32:21.526298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.318 [2024-11-29 10:32:21.526307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:42.318 [2024-11-29 10:32:21.526314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:42.318 [2024-11-29 10:32:21.526321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.318 [2024-11-29 10:32:21.527080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 90.800 ms, result 0 00:26:43.253  [2024-11-29T10:32:23.663Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-29T10:32:24.607Z] Copying: 69/1024 [MB] (22 MBps) [2024-11-29T10:32:25.557Z] Copying: 82/1024 [MB] (13 MBps) [2024-11-29T10:32:26.943Z] Copying: 98/1024 [MB] (15 MBps) [2024-11-29T10:32:27.884Z] Copying: 113/1024 [MB] (14 MBps) [2024-11-29T10:32:28.848Z] Copying: 133/1024 [MB] (20 MBps) [2024-11-29T10:32:29.792Z] Copying: 148/1024 [MB] (14 MBps) [2024-11-29T10:32:30.736Z] Copying: 163/1024 [MB] (14 MBps) [2024-11-29T10:32:31.677Z] Copying: 176/1024 [MB] (13 MBps) [2024-11-29T10:32:32.621Z] Copying: 188/1024 [MB] (11 MBps) [2024-11-29T10:32:33.565Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-29T10:32:34.945Z] Copying: 215/1024 [MB] (14 MBps) [2024-11-29T10:32:35.881Z] Copying: 232/1024 [MB] (17 MBps) [2024-11-29T10:32:36.867Z] Copying: 246/1024 [MB] (13 MBps) [2024-11-29T10:32:37.812Z] Copying: 264/1024 [MB] (17 MBps) [2024-11-29T10:32:38.753Z] Copying: 283/1024 [MB] (19 MBps) [2024-11-29T10:32:39.695Z] Copying: 304/1024 [MB] (20 MBps) [2024-11-29T10:32:40.638Z] Copying: 325/1024 [MB] (21 MBps) [2024-11-29T10:32:41.578Z] Copying: 337/1024 [MB] (12 MBps) [2024-11-29T10:32:42.961Z] Copying: 353/1024 [MB] (16 MBps) [2024-11-29T10:32:43.905Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-29T10:32:44.846Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-29T10:32:45.832Z] Copying: 400/1024 [MB] (23 MBps) [2024-11-29T10:32:46.777Z] Copying: 414/1024 [MB] (14 MBps) [2024-11-29T10:32:47.722Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-29T10:32:48.665Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-29T10:32:49.608Z] Copying: 453/1024 [MB] (16 MBps) [2024-11-29T10:32:50.550Z] Copying: 464/1024 [MB] 
(10 MBps) [2024-11-29T10:32:51.934Z] Copying: 475/1024 [MB] (11 MBps) [2024-11-29T10:32:52.876Z] Copying: 489/1024 [MB] (13 MBps) [2024-11-29T10:32:53.835Z] Copying: 506/1024 [MB] (17 MBps) [2024-11-29T10:32:54.770Z] Copying: 546/1024 [MB] (39 MBps) [2024-11-29T10:32:55.704Z] Copying: 595/1024 [MB] (48 MBps) [2024-11-29T10:32:56.636Z] Copying: 634/1024 [MB] (38 MBps) [2024-11-29T10:32:57.571Z] Copying: 675/1024 [MB] (41 MBps) [2024-11-29T10:32:58.949Z] Copying: 701/1024 [MB] (26 MBps) [2024-11-29T10:32:59.883Z] Copying: 739/1024 [MB] (37 MBps) [2024-11-29T10:33:00.826Z] Copying: 765/1024 [MB] (25 MBps) [2024-11-29T10:33:01.768Z] Copying: 804/1024 [MB] (38 MBps) [2024-11-29T10:33:02.709Z] Copying: 817/1024 [MB] (13 MBps) [2024-11-29T10:33:03.652Z] Copying: 833/1024 [MB] (15 MBps) [2024-11-29T10:33:04.597Z] Copying: 844/1024 [MB] (10 MBps) [2024-11-29T10:33:05.541Z] Copying: 860/1024 [MB] (15 MBps) [2024-11-29T10:33:06.929Z] Copying: 879/1024 [MB] (18 MBps) [2024-11-29T10:33:07.875Z] Copying: 899/1024 [MB] (20 MBps) [2024-11-29T10:33:08.817Z] Copying: 922/1024 [MB] (22 MBps) [2024-11-29T10:33:09.791Z] Copying: 937/1024 [MB] (15 MBps) [2024-11-29T10:33:10.743Z] Copying: 961/1024 [MB] (23 MBps) [2024-11-29T10:33:11.689Z] Copying: 986/1024 [MB] (24 MBps) [2024-11-29T10:33:12.635Z] Copying: 1003/1024 [MB] (17 MBps) [2024-11-29T10:33:13.581Z] Copying: 1023/1024 [MB] (19 MBps) [2024-11-29T10:33:13.581Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-29 10:33:13.436010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.436073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.116 [2024-11-29 10:33:13.436089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.116 [2024-11-29 10:33:13.436098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.439217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.116 [2024-11-29 10:33:13.442596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.442630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.116 [2024-11-29 10:33:13.442640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:27:34.116 [2024-11-29 10:33:13.442648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.452996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.453029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.116 [2024-11-29 10:33:13.453039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.405 ms 00:27:34.116 [2024-11-29 10:33:13.453047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.470663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.470696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.116 [2024-11-29 10:33:13.470707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.597 ms 00:27:34.116 [2024-11-29 10:33:13.470722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.476863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.476890] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.116 [2024-11-29 10:33:13.476901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:27:34.116 [2024-11-29 10:33:13.476909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.478002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.478032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:34.116 [2024-11-29 10:33:13.478042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:27:34.116 [2024-11-29 10:33:13.478049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.481725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.481756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:34.116 [2024-11-29 10:33:13.481766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:27:34.116 [2024-11-29 10:33:13.481775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.525655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.525701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:34.116 [2024-11-29 10:33:13.525714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.840 ms 00:27:34.116 [2024-11-29 10:33:13.525722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.527301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.527332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:34.116 [2024-11-29 10:33:13.527341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:27:34.116 [2024-11-29 10:33:13.527348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.528524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.528553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:34.116 [2024-11-29 10:33:13.528561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:27:34.116 [2024-11-29 10:33:13.528568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.529455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.529485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.116 [2024-11-29 10:33:13.529494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:27:34.116 [2024-11-29 10:33:13.529502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.530332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-11-29 10:33:13.530362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.116 [2024-11-29 10:33:13.530371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:27:34.116 [2024-11-29 10:33:13.530378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.116 [2024-11-29 10:33:13.530405] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:27:34.116 [2024-11-29 10:33:13.530423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105984 / 261120 wr_cnt: 1 state: open 00:27:34.116 [2024-11-29 10:33:13.530434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.116 [2024-11-29 10:33:13.530655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530816] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.530997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 
10:33:13.531005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:27:34.117 [2024-11-29 10:33:13.531190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:34.117 [2024-11-29 10:33:13.531206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:34.117 [2024-11-29 10:33:13.531218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99d4e7ab-8983-46a9-bd57-28b2bf7a4dea 00:27:34.117 [2024-11-29 10:33:13.531226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105984 00:27:34.117 [2024-11-29 10:33:13.531233] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106944 00:27:34.117 [2024-11-29 10:33:13.531244] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105984 00:27:34.117 [2024-11-29 10:33:13.531255] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:27:34.117 [2024-11-29 10:33:13.531263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.117 [2024-11-29 10:33:13.531271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:34.117 [2024-11-29 10:33:13.531278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.117 [2024-11-29 10:33:13.531285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.117 [2024-11-29 10:33:13.531292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.117 [2024-11-29 10:33:13.531299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-11-29 10:33:13.531306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:34.117 [2024-11-29 10:33:13.531316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:27:34.117 [2024-11-29 10:33:13.531324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-11-29 10:33:13.533051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-11-29 10:33:13.533075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:34.117 [2024-11-29 10:33:13.533085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:27:34.117 [2024-11-29 10:33:13.533094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-11-29 10:33:13.533191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-11-29 10:33:13.533201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:34.117 [2024-11-29 10:33:13.533213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:34.117 [2024-11-29 10:33:13.533224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-11-29 10:33:13.539061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.117 [2024-11-29 10:33:13.539091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.117 [2024-11-29 10:33:13.539101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.539109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.539163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.539172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.118 [2024-11-29 10:33:13.539180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.539187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.539227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.539237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.118 [2024-11-29 10:33:13.539246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.539253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.539268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.539280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.118 [2024-11-29 10:33:13.539287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.539295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.550148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.550185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.118 [2024-11-29 10:33:13.550196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.550204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.558956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.558995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.118 [2024-11-29 10:33:13.559005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.559091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.118 [2024-11-29 10:33:13.559107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.559148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.118 [2024-11-29 10:33:13.559159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.559258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.118 [2024-11-29 10:33:13.559266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.559315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.118 
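
[Editor's note] The statistics dump a few entries above allows a quick consistency check: the write amplification factor (WAF) is simply total device writes divided by user writes. Recomputing it from the logged counters:

    # WAF = total writes / user writes, using the values from the dump above
    awk 'BEGIN { printf "WAF: %.4f\n", 106944 / 105984 }'
    # prints "WAF: 1.0091", matching the logged value

Only Band 1 holds data (105984 of 261120 blocks valid, wr_cnt 1); all other bands are still free, so garbage collection has not yet added relocation writes.
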
[2024-11-29 10:33:13.559324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.559387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.118 [2024-11-29 10:33:13.559395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.118 [2024-11-29 10:33:13.559468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.118 [2024-11-29 10:33:13.559479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.118 [2024-11-29 10:33:13.559487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.118 [2024-11-29 10:33:13.559613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 125.525 ms, result 0 00:27:35.057 00:27:35.057 00:27:35.057 10:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:36.965 10:33:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:37.224 [2024-11-29 10:33:16.467568] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
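
[Editor's note] This is the verification half of the dirty-shutdown test: spdk_dd starts its own SPDK application instance (the initialization banner above), opens ftl0 from the saved ftl.json configuration (which triggers the FTL recovery trace that follows) and reads 262144 blocks back into testfile, so the harness can compare md5sums against the reference testfile2 hashed by the md5sum step above. The equivalent standalone invocation, reusing the exact paths echoed by the harness:

    # Read the data back out of the ftl0 bdev into a regular file...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --count=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # ...then compare the read-back file against the reference copy
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile \
           /home/vagrant/spdk_repo/spdk/test/ftl/testfile2

Note the recovery markers in the restart trace that follows: "SHM: clean 0, shm_clean 0" when the superblock is loaded, and the NV cache reporting "full chunks = 4, empty chunks = 0" as its state is restored.
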
00:27:37.224 [2024-11-29 10:33:16.467678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92693 ] 00:27:37.224 [2024-11-29 10:33:16.612537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.224 [2024-11-29 10:33:16.632202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.483 [2024-11-29 10:33:16.722330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:37.483 [2024-11-29 10:33:16.722395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:37.484 [2024-11-29 10:33:16.876345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.876392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:37.484 [2024-11-29 10:33:16.876408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:37.484 [2024-11-29 10:33:16.876416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.876461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.876471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:37.484 [2024-11-29 10:33:16.876479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:37.484 [2024-11-29 10:33:16.876496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.876519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:37.484 [2024-11-29 10:33:16.876873] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:37.484 [2024-11-29 10:33:16.876894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.876901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:37.484 [2024-11-29 10:33:16.876915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:27:37.484 [2024-11-29 10:33:16.876923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.877997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:37.484 [2024-11-29 10:33:16.880153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.880187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:37.484 [2024-11-29 10:33:16.880197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:27:37.484 [2024-11-29 10:33:16.880211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.880260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.880269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:37.484 [2024-11-29 10:33:16.880277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:37.484 [2024-11-29 10:33:16.880284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.885301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:37.484 [2024-11-29 10:33:16.885332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:37.484 [2024-11-29 10:33:16.885343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.964 ms 00:27:37.484 [2024-11-29 10:33:16.885350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.885428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.885437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:37.484 [2024-11-29 10:33:16.885445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:37.484 [2024-11-29 10:33:16.885452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.885486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.885495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:37.484 [2024-11-29 10:33:16.885503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:37.484 [2024-11-29 10:33:16.885512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.885533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:37.484 [2024-11-29 10:33:16.886908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.886936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:37.484 [2024-11-29 10:33:16.886944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:27:37.484 [2024-11-29 10:33:16.886952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.886980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.886988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:37.484 [2024-11-29 10:33:16.886996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:37.484 [2024-11-29 10:33:16.887005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.887031] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:37.484 [2024-11-29 10:33:16.887052] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:37.484 [2024-11-29 10:33:16.887092] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:37.484 [2024-11-29 10:33:16.887106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:37.484 [2024-11-29 10:33:16.887207] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:37.484 [2024-11-29 10:33:16.887219] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:37.484 [2024-11-29 10:33:16.887233] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:37.484 [2024-11-29 10:33:16.887243] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887252] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:37.484 [2024-11-29 10:33:16.887267] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:37.484 [2024-11-29 10:33:16.887274] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:37.484 [2024-11-29 10:33:16.887281] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:37.484 [2024-11-29 10:33:16.887288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.887295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:37.484 [2024-11-29 10:33:16.887302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:37.484 [2024-11-29 10:33:16.887309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.887395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.484 [2024-11-29 10:33:16.887404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:37.484 [2024-11-29 10:33:16.887411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:37.484 [2024-11-29 10:33:16.887418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.484 [2024-11-29 10:33:16.887518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:37.484 [2024-11-29 10:33:16.887539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:37.484 [2024-11-29 10:33:16.887547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:37.484 [2024-11-29 10:33:16.887572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:37.484 [2024-11-29 10:33:16.887595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:37.484 [2024-11-29 10:33:16.887610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:37.484 [2024-11-29 10:33:16.887618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:37.484 [2024-11-29 10:33:16.887625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:37.484 [2024-11-29 10:33:16.887633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:37.484 [2024-11-29 10:33:16.887641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:37.484 [2024-11-29 10:33:16.887649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:37.484 [2024-11-29 10:33:16.887666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887674] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:37.484 [2024-11-29 10:33:16.887689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:37.484 [2024-11-29 10:33:16.887710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:37.484 [2024-11-29 10:33:16.887731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:37.484 [2024-11-29 10:33:16.887754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.484 [2024-11-29 10:33:16.887768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:37.484 [2024-11-29 10:33:16.887775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:37.484 [2024-11-29 10:33:16.887786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:37.484 [2024-11-29 10:33:16.887794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:37.484 [2024-11-29 10:33:16.887813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:37.484 [2024-11-29 10:33:16.887820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:37.484 [2024-11-29 10:33:16.887828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:37.484 [2024-11-29 10:33:16.887835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:37.484 [2024-11-29 10:33:16.887843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.485 [2024-11-29 10:33:16.887850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:37.485 [2024-11-29 10:33:16.887858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:37.485 [2024-11-29 10:33:16.887865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.485 [2024-11-29 10:33:16.887872] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:37.485 [2024-11-29 10:33:16.887886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:37.485 [2024-11-29 10:33:16.887894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:37.485 [2024-11-29 10:33:16.887903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.485 [2024-11-29 10:33:16.887911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:37.485 [2024-11-29 10:33:16.887919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:37.485 [2024-11-29 10:33:16.887929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:37.485 
[2024-11-29 10:33:16.887937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:37.485 [2024-11-29 10:33:16.887944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:37.485 [2024-11-29 10:33:16.887950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:37.485 [2024-11-29 10:33:16.887958] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:37.485 [2024-11-29 10:33:16.887970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.887978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:37.485 [2024-11-29 10:33:16.887985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:37.485 [2024-11-29 10:33:16.887992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:37.485 [2024-11-29 10:33:16.887999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:37.485 [2024-11-29 10:33:16.888006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:37.485 [2024-11-29 10:33:16.888013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:37.485 [2024-11-29 10:33:16.888020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:37.485 [2024-11-29 10:33:16.888027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:37.485 [2024-11-29 10:33:16.888034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:37.485 [2024-11-29 10:33:16.888046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.888055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.888062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.888069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.888077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:37.485 [2024-11-29 10:33:16.888084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:37.485 [2024-11-29 10:33:16.888092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.888099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:37.485 [2024-11-29 10:33:16.888106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:37.485 [2024-11-29 10:33:16.888113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:37.485 [2024-11-29 10:33:16.888120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:37.485 [2024-11-29 10:33:16.888127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.888134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:37.485 [2024-11-29 10:33:16.888141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:27:37.485 [2024-11-29 10:33:16.888151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.897187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.897221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:37.485 [2024-11-29 10:33:16.897230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.996 ms 00:27:37.485 [2024-11-29 10:33:16.897237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.897312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.897320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:37.485 [2024-11-29 10:33:16.897328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:37.485 [2024-11-29 10:33:16.897338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.915811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.915854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:37.485 [2024-11-29 10:33:16.915867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.413 ms 00:27:37.485 [2024-11-29 10:33:16.915876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.915918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.915928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:37.485 [2024-11-29 10:33:16.915937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:37.485 [2024-11-29 10:33:16.915945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.916315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.916342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:37.485 [2024-11-29 10:33:16.916353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:27:37.485 [2024-11-29 10:33:16.916361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.916495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.916513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:37.485 [2024-11-29 10:33:16.916523] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:27:37.485 [2024-11-29 10:33:16.916533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.921989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.922020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:37.485 [2024-11-29 10:33:16.922031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.433 ms 00:27:37.485 [2024-11-29 10:33:16.922047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.924488] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:37.485 [2024-11-29 10:33:16.924526] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:37.485 [2024-11-29 10:33:16.924540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.924550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:37.485 [2024-11-29 10:33:16.924559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:27:37.485 [2024-11-29 10:33:16.924566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.939851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.939905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:37.485 [2024-11-29 10:33:16.939917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.951 ms 00:27:37.485 [2024-11-29 10:33:16.939925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.942632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.942737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:37.485 [2024-11-29 10:33:16.942768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:27:37.485 [2024-11-29 10:33:16.942788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.945682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.945758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:37.485 [2024-11-29 10:33:16.945783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:27:37.485 [2024-11-29 10:33:16.945836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.485 [2024-11-29 10:33:16.946382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.485 [2024-11-29 10:33:16.946410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:37.485 [2024-11-29 10:33:16.946423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:27:37.485 [2024-11-29 10:33:16.946434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.744 [2024-11-29 10:33:16.962733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.744 [2024-11-29 10:33:16.962783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:37.744 [2024-11-29 10:33:16.962795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.266 ms 00:27:37.744 [2024-11-29 10:33:16.962817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.744 [2024-11-29 10:33:16.970175] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:37.744 [2024-11-29 10:33:16.972435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.744 [2024-11-29 10:33:16.972472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:37.744 [2024-11-29 10:33:16.972483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.579 ms 00:27:37.744 [2024-11-29 10:33:16.972492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.744 [2024-11-29 10:33:16.972559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.744 [2024-11-29 10:33:16.972570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:37.744 [2024-11-29 10:33:16.972585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:37.744 [2024-11-29 10:33:16.972599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.744 [2024-11-29 10:33:16.974044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.744 [2024-11-29 10:33:16.974082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:37.744 [2024-11-29 10:33:16.974092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:27:37.744 [2024-11-29 10:33:16.974102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.744 [2024-11-29 10:33:16.974127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.744 [2024-11-29 10:33:16.974139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:37.744 [2024-11-29 10:33:16.974147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:37.745 [2024-11-29 10:33:16.974154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.745 [2024-11-29 10:33:16.974199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:37.745 [2024-11-29 10:33:16.974210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.745 [2024-11-29 10:33:16.974221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:37.745 [2024-11-29 10:33:16.974232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:37.745 [2024-11-29 10:33:16.974238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.745 [2024-11-29 10:33:16.977577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.745 [2024-11-29 10:33:16.977613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:37.745 [2024-11-29 10:33:16.977627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:27:37.745 [2024-11-29 10:33:16.977635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.745 [2024-11-29 10:33:16.977700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.745 [2024-11-29 10:33:16.977710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:37.745 [2024-11-29 10:33:16.977723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:37.745 [2024-11-29 10:33:16.977732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.745 
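
Each FTL management step in this log is bracketed by a trio of trace_step records (name, duration, status) from mngt/ftl_mngt.c, and the finish_msg record just below rolls the whole sequence up as the 'FTL startup' process (101.881 ms here). A minimal sketch for tallying per-step durations from a captured log, assuming the record format shown above; 'ftl.log' is a hypothetical capture of these lines, not a file the test produces, and the summed step time comes close to, but need not exactly equal, the reported process duration:

    import re

    # Pair each 428:"name:" record with the 430:"duration: ... ms" record that
    # follows it; the non-greedy match copes with several records fused per line.
    pair_re = re.compile(
        r"name: (?P<name>.*?) \d{2}:\d{2}:\d{2}\.\d{3}"
        r".*?duration:\s+(?P<ms>[0-9.]+) ms",
        re.S,
    )

    text = open("ftl.log").read()   # hypothetical capture of the lines above
    steps = [(m["name"], float(m["ms"])) for m in pair_re.finditer(text)]
    for name, ms in sorted(steps, key=lambda s: -s[1]):
        print(f"{ms:8.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):8.3f} ms  summed across steps")
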
[2024-11-29 10:33:16.978622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.881 ms, result 0 00:27:39.120  [2024-11-29T10:33:19.520Z] Copying: 960/1048576 [kB] (960 kBps) [2024-11-29T10:33:20.453Z] Copying: 4656/1048576 [kB] (3696 kBps) [2024-11-29T10:33:21.388Z] Copying: 44/1024 [MB] (39 MBps) [2024-11-29T10:33:22.323Z] Copying: 98/1024 [MB] (54 MBps) [2024-11-29T10:33:23.295Z] Copying: 152/1024 [MB] (53 MBps) [2024-11-29T10:33:24.243Z] Copying: 200/1024 [MB] (48 MBps) [2024-11-29T10:33:25.183Z] Copying: 231/1024 [MB] (31 MBps) [2024-11-29T10:33:26.555Z] Copying: 266/1024 [MB] (34 MBps) [2024-11-29T10:33:27.488Z] Copying: 318/1024 [MB] (52 MBps) [2024-11-29T10:33:28.423Z] Copying: 368/1024 [MB] (50 MBps) [2024-11-29T10:33:29.359Z] Copying: 414/1024 [MB] (45 MBps) [2024-11-29T10:33:30.305Z] Copying: 462/1024 [MB] (47 MBps) [2024-11-29T10:33:31.251Z] Copying: 489/1024 [MB] (27 MBps) [2024-11-29T10:33:32.196Z] Copying: 517/1024 [MB] (27 MBps) [2024-11-29T10:33:33.171Z] Copying: 544/1024 [MB] (26 MBps) [2024-11-29T10:33:34.584Z] Copying: 574/1024 [MB] (30 MBps) [2024-11-29T10:33:35.527Z] Copying: 604/1024 [MB] (29 MBps) [2024-11-29T10:33:36.470Z] Copying: 626/1024 [MB] (21 MBps) [2024-11-29T10:33:37.414Z] Copying: 647/1024 [MB] (20 MBps) [2024-11-29T10:33:38.355Z] Copying: 669/1024 [MB] (22 MBps) [2024-11-29T10:33:39.317Z] Copying: 693/1024 [MB] (24 MBps) [2024-11-29T10:33:40.260Z] Copying: 713/1024 [MB] (20 MBps) [2024-11-29T10:33:41.207Z] Copying: 741/1024 [MB] (27 MBps) [2024-11-29T10:33:42.595Z] Copying: 765/1024 [MB] (24 MBps) [2024-11-29T10:33:43.169Z] Copying: 782/1024 [MB] (16 MBps) [2024-11-29T10:33:44.554Z] Copying: 798/1024 [MB] (15 MBps) [2024-11-29T10:33:45.498Z] Copying: 842/1024 [MB] (44 MBps) [2024-11-29T10:33:46.441Z] Copying: 871/1024 [MB] (29 MBps) [2024-11-29T10:33:47.382Z] Copying: 896/1024 [MB] (25 MBps) [2024-11-29T10:33:48.327Z] Copying: 929/1024 [MB] (32 MBps) [2024-11-29T10:33:49.269Z] Copying: 951/1024 [MB] (22 MBps) [2024-11-29T10:33:50.215Z] Copying: 971/1024 [MB] (20 MBps) [2024-11-29T10:33:51.160Z] Copying: 994/1024 [MB] (22 MBps) [2024-11-29T10:33:51.467Z] Copying: 1018/1024 [MB] (23 MBps) [2024-11-29T10:33:52.043Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-29 10:33:51.874637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.874756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:12.578 [2024-11-29 10:33:51.874776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:12.578 [2024-11-29 10:33:51.874786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.874842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:12.578 [2024-11-29 10:33:51.875966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.876013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:12.578 [2024-11-29 10:33:51.876026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:28:12.578 [2024-11-29 10:33:51.876035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.876398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.876421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core 
poller 00:28:12.578 [2024-11-29 10:33:51.876443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:28:12.578 [2024-11-29 10:33:51.876453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.892598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.892666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:12.578 [2024-11-29 10:33:51.892691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.121 ms 00:28:12.578 [2024-11-29 10:33:51.892702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.899539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.899648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:12.578 [2024-11-29 10:33:51.899661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.794 ms 00:28:12.578 [2024-11-29 10:33:51.899671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.903224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.903277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:12.578 [2024-11-29 10:33:51.903289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:28:12.578 [2024-11-29 10:33:51.903298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.909845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.909906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:12.578 [2024-11-29 10:33:51.909932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.497 ms 00:28:12.578 [2024-11-29 10:33:51.909942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.915107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.915161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:12.578 [2024-11-29 10:33:51.915174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.109 ms 00:28:12.578 [2024-11-29 10:33:51.915183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.918916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.918969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:12.578 [2024-11-29 10:33:51.918980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:28:12.578 [2024-11-29 10:33:51.918988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.921999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.922049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:12.578 [2024-11-29 10:33:51.922062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:28:12.578 [2024-11-29 10:33:51.922071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.924640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.924692] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:12.578 [2024-11-29 10:33:51.924702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:28:12.578 [2024-11-29 10:33:51.924710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.927508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.578 [2024-11-29 10:33:51.927564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:12.578 [2024-11-29 10:33:51.927576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:28:12.578 [2024-11-29 10:33:51.927584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.578 [2024-11-29 10:33:51.927632] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:12.578 [2024-11-29 10:33:51.927649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:12.578 [2024-11-29 10:33:51.927662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:12.578 [2024-11-29 10:33:51.927672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:12.578 [2024-11-29 10:33:51.927828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 
0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.927998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928286] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 
10:33:51.928491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:12.579 [2024-11-29 10:33:51.928565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:12.579 [2024-11-29 10:33:51.928587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99d4e7ab-8983-46a9-bd57-28b2bf7a4dea 00:28:12.579 [2024-11-29 10:33:51.928602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:12.579 [2024-11-29 10:33:51.928613] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158656 00:28:12.579 [2024-11-29 10:33:51.928622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156672 00:28:12.579 [2024-11-29 10:33:51.928631] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:28:12.579 [2024-11-29 10:33:51.928639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:12.579 [2024-11-29 10:33:51.928648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:12.579 [2024-11-29 10:33:51.928656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:12.580 [2024-11-29 10:33:51.928671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:12.580 [2024-11-29 10:33:51.928677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:12.580 [2024-11-29 10:33:51.928686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.580 [2024-11-29 10:33:51.928695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:12.580 [2024-11-29 10:33:51.928707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:28:12.580 [2024-11-29 10:33:51.928715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.932137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.580 [2024-11-29 10:33:51.932182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:12.580 [2024-11-29 10:33:51.932195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.397 ms 00:28:12.580 [2024-11-29 10:33:51.932205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.932365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.580 [2024-11-29 10:33:51.932388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:12.580 [2024-11-29 10:33:51.932398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:28:12.580 [2024-11-29 10:33:51.932412] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.942908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.942960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:12.580 [2024-11-29 10:33:51.942972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.942982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.943055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.943080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:12.580 [2024-11-29 10:33:51.943090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.943101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.943168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.943181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:12.580 [2024-11-29 10:33:51.943191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.943199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.943222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.943234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:12.580 [2024-11-29 10:33:51.943242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.943254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.963334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.963395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:12.580 [2024-11-29 10:33:51.963408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.963418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.978588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.978666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:12.580 [2024-11-29 10:33:51.978685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.978694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.978762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.978774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.580 [2024-11-29 10:33:51.978783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.978794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.978896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.978910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.580 [2024-11-29 10:33:51.978920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.978929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.979044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.979059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.580 [2024-11-29 10:33:51.979073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.979083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.979119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.979132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:12.580 [2024-11-29 10:33:51.979142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.979154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.979220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.979235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.580 [2024-11-29 10:33:51.979246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.979255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.979311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.580 [2024-11-29 10:33:51.979325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.580 [2024-11-29 10:33:51.979335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.580 [2024-11-29 10:33:51.979344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.580 [2024-11-29 10:33:51.979523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 104.849 ms, result 0 00:28:13.151 00:28:13.151 00:28:13.151 10:33:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:15.698 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:15.698 10:33:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:15.698 [2024-11-29 10:33:54.676312] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
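
The md5sum -c above confirms that the first half of the data written before the dirty shutdown survived intact; the spdk_dd invocation then reads the second half back out of the ftl0 bdev, with --count and --skip following dd semantics, counted in bdev logical blocks. At the 4 KiB block size implied by the 1024 MiB copy totals earlier in this log, the numbers work out as follows (a worked check, not part of the test itself):

    BLOCK = 4096              # assumed FTL logical block size (4 KiB)
    count = skip = 262144     # from the spdk_dd command line above

    read_mib  = count * BLOCK // 2**20
    start_mib = skip  * BLOCK // 2**20
    print(f"reads {read_mib} MiB starting at offset {start_mib} MiB")
    # -> reads 1024 MiB starting at offset 1024 MiB: the second 1 GiB of ftl0,
    #    written to testfile2, presumably checksummed the same way afterwards.
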
00:28:15.698 [2024-11-29 10:33:54.676469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93086 ] 00:28:15.698 [2024-11-29 10:33:54.820150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.698 [2024-11-29 10:33:54.854543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.698 [2024-11-29 10:33:54.957779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.698 [2024-11-29 10:33:54.957843] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.698 [2024-11-29 10:33:55.114660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.698 [2024-11-29 10:33:55.114694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:15.698 [2024-11-29 10:33:55.114706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:15.698 [2024-11-29 10:33:55.114713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.698 [2024-11-29 10:33:55.114752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.698 [2024-11-29 10:33:55.114760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:15.698 [2024-11-29 10:33:55.114767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:15.698 [2024-11-29 10:33:55.114780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.698 [2024-11-29 10:33:55.114814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:15.698 [2024-11-29 10:33:55.115002] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:15.698 [2024-11-29 10:33:55.115014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.115021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:15.699 [2024-11-29 10:33:55.115030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:28:15.699 [2024-11-29 10:33:55.115036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.116268] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:15.699 [2024-11-29 10:33:55.119210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.119236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:15.699 [2024-11-29 10:33:55.119244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:28:15.699 [2024-11-29 10:33:55.119260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.119305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.119316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:15.699 [2024-11-29 10:33:55.119325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:15.699 [2024-11-29 10:33:55.119331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.125464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
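
This second startup replays the same layout negotiation as the first boot, and the geometry lines that follow are internally consistent. Two quick cross-checks against the dump below, assuming the 4 KiB logical block size used throughout this log (a sketch to make the arithmetic explicit, not test code):

    BLOCK = 4096                               # assumed logical block size
    l2p_entries, l2p_addr_size = 20971520, 4   # from the layout lines below
    p2l_pages = 2048

    # L2P table: one 4-byte entry per user block -> the 80.00 MiB l2p region.
    print(l2p_entries * l2p_addr_size / 2**20, "MiB")   # 80.0

    # Each P2L checkpoint region: 2048 one-block pages -> 8.00 MiB p2l0..p2l3.
    print(p2l_pages * BLOCK / 2**20, "MiB")             # 8.0

    # Addressable user space: 20971520 blocks of 4 KiB -> 80 GiB exposed out
    # of the 103424 MiB base device; the rest is headroom for band metadata
    # and over-provisioning.
    print(l2p_entries * BLOCK / 2**30, "GiB")           # 80.0
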
00:28:15.699 [2024-11-29 10:33:55.125485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:15.699 [2024-11-29 10:33:55.125497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:28:15.699 [2024-11-29 10:33:55.125503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.125572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.125583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:15.699 [2024-11-29 10:33:55.125591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:15.699 [2024-11-29 10:33:55.125597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.125631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.125638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:15.699 [2024-11-29 10:33:55.125644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:15.699 [2024-11-29 10:33:55.125653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.125674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:15.699 [2024-11-29 10:33:55.127233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.127253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:15.699 [2024-11-29 10:33:55.127261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:28:15.699 [2024-11-29 10:33:55.127266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.127291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.127297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:15.699 [2024-11-29 10:33:55.127307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:15.699 [2024-11-29 10:33:55.127317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.127333] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:15.699 [2024-11-29 10:33:55.127350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:15.699 [2024-11-29 10:33:55.127386] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:15.699 [2024-11-29 10:33:55.127401] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:15.699 [2024-11-29 10:33:55.127485] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:15.699 [2024-11-29 10:33:55.127493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:15.699 [2024-11-29 10:33:55.127504] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:15.699 [2024-11-29 10:33:55.127512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127519] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127525] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:15.699 [2024-11-29 10:33:55.127531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:15.699 [2024-11-29 10:33:55.127537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:15.699 [2024-11-29 10:33:55.127543] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:15.699 [2024-11-29 10:33:55.127550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.127559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:15.699 [2024-11-29 10:33:55.127566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:28:15.699 [2024-11-29 10:33:55.127572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.127638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.699 [2024-11-29 10:33:55.127650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:15.699 [2024-11-29 10:33:55.127657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:15.699 [2024-11-29 10:33:55.127662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.699 [2024-11-29 10:33:55.127745] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:15.699 [2024-11-29 10:33:55.127754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:15.699 [2024-11-29 10:33:55.127762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:15.699 [2024-11-29 10:33:55.127781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:15.699 [2024-11-29 10:33:55.127809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.699 [2024-11-29 10:33:55.127823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:15.699 [2024-11-29 10:33:55.127828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:15.699 [2024-11-29 10:33:55.127833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.699 [2024-11-29 10:33:55.127838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:15.699 [2024-11-29 10:33:55.127845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:15.699 [2024-11-29 10:33:55.127850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:15.699 [2024-11-29 10:33:55.127862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127867] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:15.699 [2024-11-29 10:33:55.127878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:15.699 [2024-11-29 10:33:55.127898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:15.699 [2024-11-29 10:33:55.127922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:15.699 [2024-11-29 10:33:55.127940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.699 [2024-11-29 10:33:55.127952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:15.699 [2024-11-29 10:33:55.127957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:15.699 [2024-11-29 10:33:55.127963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.699 [2024-11-29 10:33:55.127970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:15.699 [2024-11-29 10:33:55.127976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:15.699 [2024-11-29 10:33:55.127983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.699 [2024-11-29 10:33:55.127989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:15.699 [2024-11-29 10:33:55.127995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:15.699 [2024-11-29 10:33:55.128003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.128010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:15.699 [2024-11-29 10:33:55.128018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:15.699 [2024-11-29 10:33:55.128024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.128029] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:15.699 [2024-11-29 10:33:55.128039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:15.699 [2024-11-29 10:33:55.128046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.699 [2024-11-29 10:33:55.128053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.699 [2024-11-29 10:33:55.128059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:15.699 [2024-11-29 10:33:55.128065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:15.699 [2024-11-29 10:33:55.128072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:15.699 
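
The superblock metadata dump repeated below (identical to the one from the first startup) describes how regions tile the NV cache device: each region starts exactly where the previous one ends, and the final scratch region (type 0xfffffffe) runs to the end of the device. A consistency check over values transcribed from the dump:

    # Region table transcribed from the "SB metadata layout - nvc" dump below.
    regions = [  # (type, blk_offs, blk_sz)
        (0x0,  0x0,    0x20),   (0x2,  0x20,   0x5000),
        (0x3,  0x5020, 0x80),   (0x4,  0x50a0, 0x80),
        (0xa,  0x5120, 0x800),  (0xb,  0x5920, 0x800),
        (0xc,  0x6120, 0x800),  (0xd,  0x6920, 0x800),
        (0xe,  0x7120, 0x40),   (0xf,  0x7160, 0x40),
        (0x10, 0x71a0, 0x20),   (0x11, 0x71c0, 0x20),
        (0x6,  0x71e0, 0x20),   (0x7,  0x7200, 0x20),
        (0xfffffffe, 0x7220, 0x13c0e0),
    ]
    # Every offset equals the previous offset plus the previous size.
    for (_, off, sz), (_, nxt, _) in zip(regions, regions[1:]):
        assert off + sz == nxt, (hex(off), hex(sz), hex(nxt))
    end = regions[-1][1] + regions[-1][2]
    print("contiguous;", hex(end), "blocks =", end * 4096 // 2**20, "MiB")
    # -> 0x143300 blocks = 5171 MiB at 4 KiB/block, matching the reported
    #    "NV cache device capacity: 5171.00 MiB".
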
[2024-11-29 10:33:55.128078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:15.699 [2024-11-29 10:33:55.128083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:15.699 [2024-11-29 10:33:55.128090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:15.700 [2024-11-29 10:33:55.128096] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:15.700 [2024-11-29 10:33:55.128105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:15.700 [2024-11-29 10:33:55.128120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:15.700 [2024-11-29 10:33:55.128128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:15.700 [2024-11-29 10:33:55.128134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:15.700 [2024-11-29 10:33:55.128140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:15.700 [2024-11-29 10:33:55.128146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:15.700 [2024-11-29 10:33:55.128153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:15.700 [2024-11-29 10:33:55.128160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:15.700 [2024-11-29 10:33:55.128166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:15.700 [2024-11-29 10:33:55.128177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:15.700 [2024-11-29 10:33:55.128209] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:15.700 [2024-11-29 10:33:55.128216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:15.700 [2024-11-29 10:33:55.128231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:15.700 [2024-11-29 10:33:55.128239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:15.700 [2024-11-29 10:33:55.128246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:15.700 [2024-11-29 10:33:55.128253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.700 [2024-11-29 10:33:55.128259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:15.700 [2024-11-29 10:33:55.128265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:28:15.700 [2024-11-29 10:33:55.128273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.700 [2024-11-29 10:33:55.139277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.700 [2024-11-29 10:33:55.139305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:15.700 [2024-11-29 10:33:55.139313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.968 ms 00:28:15.700 [2024-11-29 10:33:55.139319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.700 [2024-11-29 10:33:55.139380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.700 [2024-11-29 10:33:55.139389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:15.700 [2024-11-29 10:33:55.139395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:15.700 [2024-11-29 10:33:55.139401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.161899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.161964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:15.960 [2024-11-29 10:33:55.161986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.457 ms 00:28:15.960 [2024-11-29 10:33:55.162001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.162067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.162085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:15.960 [2024-11-29 10:33:55.162108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:15.960 [2024-11-29 10:33:55.162122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.162676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.162716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:15.960 [2024-11-29 10:33:55.162735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:28:15.960 [2024-11-29 10:33:55.162751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.163007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.163034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:15.960 [2024-11-29 10:33:55.163049] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:28:15.960 [2024-11-29 10:33:55.163063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.170050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.170073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:15.960 [2024-11-29 10:33:55.170080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:28:15.960 [2024-11-29 10:33:55.170086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.173230] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:15.960 [2024-11-29 10:33:55.173254] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:15.960 [2024-11-29 10:33:55.173270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.173277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:15.960 [2024-11-29 10:33:55.173284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:28:15.960 [2024-11-29 10:33:55.173290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.185336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.185367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:15.960 [2024-11-29 10:33:55.185384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.014 ms 00:28:15.960 [2024-11-29 10:33:55.185390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.187449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.187473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:15.960 [2024-11-29 10:33:55.187480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:28:15.960 [2024-11-29 10:33:55.187486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.188946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.188966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:15.960 [2024-11-29 10:33:55.188974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:28:15.960 [2024-11-29 10:33:55.188986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.189283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.189302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:15.960 [2024-11-29 10:33:55.189311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:28:15.960 [2024-11-29 10:33:55.189317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.207181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.207208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:15.960 [2024-11-29 10:33:55.207216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.845 ms 00:28:15.960 [2024-11-29 10:33:55.207223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.213112] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:15.960 [2024-11-29 10:33:55.215271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.215297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:15.960 [2024-11-29 10:33:55.215308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.019 ms 00:28:15.960 [2024-11-29 10:33:55.215314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.215370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.215378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:15.960 [2024-11-29 10:33:55.215393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:15.960 [2024-11-29 10:33:55.215399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.216040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.216064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:15.960 [2024-11-29 10:33:55.216071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:28:15.960 [2024-11-29 10:33:55.216077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.216097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.216104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:15.960 [2024-11-29 10:33:55.216111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:15.960 [2024-11-29 10:33:55.216118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.216147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:15.960 [2024-11-29 10:33:55.216156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.216163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:15.960 [2024-11-29 10:33:55.216172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:15.960 [2024-11-29 10:33:55.216180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.220020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.220048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:15.960 [2024-11-29 10:33:55.220055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:28:15.960 [2024-11-29 10:33:55.220062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 [2024-11-29 10:33:55.220117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.960 [2024-11-29 10:33:55.220125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:15.960 [2024-11-29 10:33:55.220131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:15.960 [2024-11-29 10:33:55.220143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.960 
[2024-11-29 10:33:55.221060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.008 ms, result 0 00:28:16.903  [2024-11-29T10:33:57.752Z] Copying: 17/1024 [MB] (17 MBps) [... ~70 incremental progress updates from the flattened progress meter elided; per-interval throughput ranged from roughly 10 to 43 MBps ...] [2024-11-29T10:35:05.948Z] Copying: 1009/1024 [MB] (43 MBps) [2024-11-29T10:35:05.948Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-29 10:35:05.867202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.867278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:26.483 [2024-11-29 10:35:05.867296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:26.483 [2024-11-29 10:35:05.867305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.867328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:26.483 [2024-11-29 10:35:05.867764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.867787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:26.483 [2024-11-29 10:35:05.867797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:29:26.483 [2024-11-29 10:35:05.867817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.868036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.868056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:26.483 [2024-11-29 10:35:05.868065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:29:26.483 [2024-11-29 10:35:05.868076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.873818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.873849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:26.483 [2024-11-29 10:35:05.873866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.726 ms 00:29:26.483 [2024-11-29 10:35:05.873879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.884029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.884050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:26.483 [2024-11-29 10:35:05.884060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.116 ms 00:29:26.483 [2024-11-29 10:35:05.884068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.885255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.885283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache
metadata 00:29:26.483 [2024-11-29 10:35:05.885292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:29:26.483 [2024-11-29 10:35:05.885299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.888534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.888560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:26.483 [2024-11-29 10:35:05.888576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:29:26.483 [2024-11-29 10:35:05.888584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.889578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.889606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:26.483 [2024-11-29 10:35:05.889615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:29:26.483 [2024-11-29 10:35:05.889630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.483 [2024-11-29 10:35:05.891050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.483 [2024-11-29 10:35:05.891069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:26.483 [2024-11-29 10:35:05.891078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:29:26.483 [2024-11-29 10:35:05.891085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.484 [2024-11-29 10:35:05.892185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.484 [2024-11-29 10:35:05.892209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:26.484 [2024-11-29 10:35:05.892217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:29:26.484 [2024-11-29 10:35:05.892223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.484 [2024-11-29 10:35:05.893369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.484 [2024-11-29 10:35:05.893406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:26.484 [2024-11-29 10:35:05.893416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:29:26.484 [2024-11-29 10:35:05.893425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.484 [2024-11-29 10:35:05.894253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.484 [2024-11-29 10:35:05.894279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:26.484 [2024-11-29 10:35:05.894288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:29:26.484 [2024-11-29 10:35:05.894295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.484 [2024-11-29 10:35:05.894310] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:26.484 [2024-11-29 10:35:05.894330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:26.484 [2024-11-29 10:35:05.894340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:26.484 [2024-11-29 10:35:05.894348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894355] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 
10:35:05.894538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:29:26.484 [2024-11-29 10:35:05.894726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:29:26.484 [2024-11-29 10:35:05.894927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.894995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:26.485 [2024-11-29 10:35:05.895097] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:26.485 [2024-11-29 10:35:05.895104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99d4e7ab-8983-46a9-bd57-28b2bf7a4dea 00:29:26.485 [2024-11-29 10:35:05.895111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 
00:29:26.485 [2024-11-29 10:35:05.895118] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:26.485 [2024-11-29 10:35:05.895125] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:26.485 [2024-11-29 10:35:05.895136] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:26.485 [2024-11-29 10:35:05.895143] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:26.485 [2024-11-29 10:35:05.895153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:26.485 [2024-11-29 10:35:05.895163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:26.485 [2024-11-29 10:35:05.895169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:26.485 [2024-11-29 10:35:05.895175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:26.485 [2024-11-29 10:35:05.895182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.485 [2024-11-29 10:35:05.895199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:26.485 [2024-11-29 10:35:05.895207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:29:26.485 [2024-11-29 10:35:05.895214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.896506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.485 [2024-11-29 10:35:05.896529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:26.485 [2024-11-29 10:35:05.896538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:29:26.485 [2024-11-29 10:35:05.896551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.896624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.485 [2024-11-29 10:35:05.896632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:26.485 [2024-11-29 10:35:05.896640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:26.485 [2024-11-29 10:35:05.896647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.901245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.901269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:26.485 [2024-11-29 10:35:05.901278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.901289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.901332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.901340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:26.485 [2024-11-29 10:35:05.901348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.901354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.901386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.901395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:26.485 [2024-11-29 10:35:05.901407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.901414] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.901434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.901442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:26.485 [2024-11-29 10:35:05.901450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.901456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.909862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.909897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:26.485 [2024-11-29 10:35:05.909907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.909918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:26.485 [2024-11-29 10:35:05.916495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:26.485 [2024-11-29 10:35:05.916567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:26.485 [2024-11-29 10:35:05.916614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:26.485 [2024-11-29 10:35:05.916694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:26.485 [2024-11-29 10:35:05.916749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:26.485 [2024-11-29 10:35:05.916817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.916865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.485 [2024-11-29 10:35:05.916874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:26.485 [2024-11-29 10:35:05.916882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.485 [2024-11-29 10:35:05.916894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.485 [2024-11-29 10:35:05.917010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.776 ms, result 0 00:29:26.743 00:29:26.743 00:29:26.743 10:35:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:29.290 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91513 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91513 ']' 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91513 00:29:29.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91513) - No such process 00:29:29.290 Process with pid 91513 is not found 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91513 is not found' 00:29:29.290 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:29.551 Remove shared memory files 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:29.551 ************************************ 00:29:29.551 END TEST ftl_dirty_shutdown 00:29:29.551 ************************************ 00:29:29.551 00:29:29.551 real 3m40.719s 00:29:29.551 user 3m55.488s 00:29:29.551 sys 0m23.169s 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.551 10:35:08 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@10 -- # set +x 00:29:29.551 10:35:08 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:29.551 10:35:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.551 10:35:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.551 10:35:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:29.551 ************************************ 00:29:29.551 START TEST ftl_upgrade_shutdown 00:29:29.551 ************************************ 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:29.551 * Looking for test storage... 00:29:29.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.551 --rc genhtml_branch_coverage=1 00:29:29.551 --rc genhtml_function_coverage=1 00:29:29.551 --rc genhtml_legend=1 00:29:29.551 --rc geninfo_all_blocks=1 00:29:29.551 --rc geninfo_unexecuted_blocks=1 00:29:29.551 00:29:29.551 ' 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.551 --rc genhtml_branch_coverage=1 00:29:29.551 --rc genhtml_function_coverage=1 00:29:29.551 --rc genhtml_legend=1 00:29:29.551 --rc geninfo_all_blocks=1 00:29:29.551 --rc geninfo_unexecuted_blocks=1 00:29:29.551 00:29:29.551 ' 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.551 --rc genhtml_branch_coverage=1 00:29:29.551 --rc genhtml_function_coverage=1 00:29:29.551 --rc genhtml_legend=1 00:29:29.551 --rc geninfo_all_blocks=1 00:29:29.551 --rc geninfo_unexecuted_blocks=1 00:29:29.551 00:29:29.551 ' 00:29:29.551 10:35:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.552 --rc genhtml_branch_coverage=1 00:29:29.552 --rc genhtml_function_coverage=1 00:29:29.552 --rc genhtml_legend=1 00:29:29.552 --rc geninfo_all_blocks=1 00:29:29.552 --rc geninfo_unexecuted_blocks=1 00:29:29.552 00:29:29.552 ' 00:29:29.552 10:35:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:29.552 10:35:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:29.552 10:35:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:29.552 10:35:09 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:29.552 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93909 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93909 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93909 ']' 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:29.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.815 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:29.815 [2024-11-29 10:35:09.089869] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:29:29.815 [2024-11-29 10:35:09.090012] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93909 ] 00:29:29.815 [2024-11-29 10:35:09.234333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.815 [2024-11-29 10:35:09.265255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:30.756 10:35:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:30.756 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:31.018 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:31.018 { 00:29:31.018 "name": "basen1", 00:29:31.018 "aliases": [ 00:29:31.018 "7fbccf29-04f5-4c82-adcc-00dd2dbedd25" 00:29:31.018 ], 00:29:31.018 "product_name": "NVMe disk", 00:29:31.018 "block_size": 4096, 00:29:31.018 "num_blocks": 1310720, 00:29:31.018 "uuid": "7fbccf29-04f5-4c82-adcc-00dd2dbedd25", 00:29:31.019 "numa_id": -1, 00:29:31.019 "assigned_rate_limits": { 00:29:31.019 "rw_ios_per_sec": 0, 00:29:31.019 "rw_mbytes_per_sec": 0, 00:29:31.019 "r_mbytes_per_sec": 0, 00:29:31.019 "w_mbytes_per_sec": 0 00:29:31.019 }, 00:29:31.019 "claimed": true, 00:29:31.019 "claim_type": "read_many_write_one", 00:29:31.019 "zoned": false, 00:29:31.019 "supported_io_types": { 00:29:31.019 "read": true, 00:29:31.019 "write": true, 00:29:31.019 "unmap": true, 00:29:31.019 "flush": true, 00:29:31.019 "reset": true, 00:29:31.019 "nvme_admin": true, 00:29:31.019 "nvme_io": true, 00:29:31.019 "nvme_io_md": false, 00:29:31.019 "write_zeroes": true, 00:29:31.019 "zcopy": false, 00:29:31.019 "get_zone_info": false, 00:29:31.019 "zone_management": false, 00:29:31.019 "zone_append": false, 00:29:31.019 "compare": true, 00:29:31.019 "compare_and_write": false, 00:29:31.019 "abort": true, 00:29:31.019 "seek_hole": false, 00:29:31.019 "seek_data": false, 00:29:31.019 "copy": true, 00:29:31.019 "nvme_iov_md": false 00:29:31.019 }, 00:29:31.019 "driver_specific": { 00:29:31.019 "nvme": [ 00:29:31.019 { 00:29:31.019 "pci_address": "0000:00:11.0", 00:29:31.019 "trid": { 00:29:31.019 "trtype": "PCIe", 00:29:31.019 "traddr": "0000:00:11.0" 00:29:31.019 }, 00:29:31.019 "ctrlr_data": { 00:29:31.019 "cntlid": 0, 00:29:31.019 "vendor_id": "0x1b36", 00:29:31.019 "model_number": "QEMU NVMe Ctrl", 00:29:31.019 "serial_number": "12341", 00:29:31.019 "firmware_revision": "8.0.0", 00:29:31.019 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:31.019 "oacs": { 00:29:31.019 "security": 0, 00:29:31.019 "format": 1, 00:29:31.019 "firmware": 0, 00:29:31.019 "ns_manage": 1 00:29:31.019 }, 00:29:31.019 "multi_ctrlr": false, 00:29:31.019 "ana_reporting": false 00:29:31.019 }, 00:29:31.019 "vs": { 00:29:31.019 "nvme_version": "1.4" 00:29:31.019 }, 00:29:31.019 "ns_data": { 00:29:31.019 "id": 1, 00:29:31.019 "can_share": false 00:29:31.019 } 00:29:31.019 } 00:29:31.019 ], 00:29:31.019 "mp_policy": "active_passive" 00:29:31.019 } 00:29:31.019 } 00:29:31.019 ]' 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:31.019 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:31.280 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=1ba34245-3904-43ad-aa21-908d44dd6124 00:29:31.280 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:31.280 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ba34245-3904-43ad-aa21-908d44dd6124 00:29:31.541 10:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:31.831 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=be860034-09bc-4064-ab0c-a3f5274d1004 00:29:31.831 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u be860034-09bc-4064-ab0c-a3f5274d1004 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 ]] 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 5120 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:32.120 { 00:29:32.120 "name": "8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8", 00:29:32.120 "aliases": [ 00:29:32.120 "lvs/basen1p0" 00:29:32.120 ], 00:29:32.120 "product_name": "Logical Volume", 00:29:32.120 "block_size": 4096, 00:29:32.120 "num_blocks": 5242880, 00:29:32.120 "uuid": "8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8", 00:29:32.120 "assigned_rate_limits": { 00:29:32.120 "rw_ios_per_sec": 0, 00:29:32.120 "rw_mbytes_per_sec": 0, 00:29:32.120 "r_mbytes_per_sec": 0, 00:29:32.120 "w_mbytes_per_sec": 0 00:29:32.120 }, 00:29:32.120 "claimed": false, 00:29:32.120 "zoned": false, 00:29:32.120 "supported_io_types": { 00:29:32.120 "read": true, 00:29:32.120 "write": true, 00:29:32.120 "unmap": true, 00:29:32.120 "flush": false, 00:29:32.120 "reset": true, 00:29:32.120 "nvme_admin": false, 00:29:32.120 "nvme_io": false, 00:29:32.120 "nvme_io_md": false, 00:29:32.120 "write_zeroes": 
true, 00:29:32.120 "zcopy": false, 00:29:32.120 "get_zone_info": false, 00:29:32.120 "zone_management": false, 00:29:32.120 "zone_append": false, 00:29:32.120 "compare": false, 00:29:32.120 "compare_and_write": false, 00:29:32.120 "abort": false, 00:29:32.120 "seek_hole": true, 00:29:32.120 "seek_data": true, 00:29:32.120 "copy": false, 00:29:32.120 "nvme_iov_md": false 00:29:32.120 }, 00:29:32.120 "driver_specific": { 00:29:32.120 "lvol": { 00:29:32.120 "lvol_store_uuid": "be860034-09bc-4064-ab0c-a3f5274d1004", 00:29:32.120 "base_bdev": "basen1", 00:29:32.120 "thin_provision": true, 00:29:32.120 "num_allocated_clusters": 0, 00:29:32.120 "snapshot": false, 00:29:32.120 "clone": false, 00:29:32.120 "esnap_clone": false 00:29:32.120 } 00:29:32.120 } 00:29:32.120 } 00:29:32.120 ]' 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:32.120 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:32.380 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:32.380 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:32.380 10:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:32.639 10:35:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:32.639 10:35:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:32.639 10:35:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8945b1ff-3a5a-4b3e-8cc9-40975ed9a7a8 -c cachen1p0 --l2p_dram_limit 2 00:29:32.900 [2024-11-29 10:35:12.231064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.900 [2024-11-29 10:35:12.231116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:32.900 [2024-11-29 10:35:12.231130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:32.900 [2024-11-29 10:35:12.231140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.900 [2024-11-29 10:35:12.231198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.900 [2024-11-29 10:35:12.231213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:32.900 [2024-11-29 10:35:12.231221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:29:32.900 [2024-11-29 10:35:12.231232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.900 [2024-11-29 10:35:12.231252] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:32.900 [2024-11-29 
10:35:12.232036] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:32.900 [2024-11-29 10:35:12.232073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.900 [2024-11-29 10:35:12.232085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:32.900 [2024-11-29 10:35:12.232095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:29:32.900 [2024-11-29 10:35:12.232105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.900 [2024-11-29 10:35:12.232763] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 1c8053d1-ca8b-4651-9e70-c19e5eabfd4e 00:29:32.900 [2024-11-29 10:35:12.233839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.900 [2024-11-29 10:35:12.233874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:32.900 [2024-11-29 10:35:12.233908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:32.900 [2024-11-29 10:35:12.233918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.900 [2024-11-29 10:35:12.239126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.900 [2024-11-29 10:35:12.239154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:32.900 [2024-11-29 10:35:12.239167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.163 ms 00:29:32.900 [2024-11-29 10:35:12.239175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.900 [2024-11-29 10:35:12.239218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.239231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:32.901 [2024-11-29 10:35:12.239243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:32.901 [2024-11-29 10:35:12.239250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.239309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.239320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:32.901 [2024-11-29 10:35:12.239329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:32.901 [2024-11-29 10:35:12.239336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.239360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:32.901 [2024-11-29 10:35:12.240827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.240855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:32.901 [2024-11-29 10:35:12.240864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.475 ms 00:29:32.901 [2024-11-29 10:35:12.240873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.240897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.240907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:32.901 [2024-11-29 10:35:12.240914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.901 [2024-11-29 10:35:12.240925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.240941] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:32.901 [2024-11-29 10:35:12.241081] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:32.901 [2024-11-29 10:35:12.241092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:32.901 [2024-11-29 10:35:12.241104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:32.901 [2024-11-29 10:35:12.241113] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241126] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241134] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:32.901 [2024-11-29 10:35:12.241147] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:32.901 [2024-11-29 10:35:12.241154] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:32.901 [2024-11-29 10:35:12.241162] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:32.901 [2024-11-29 10:35:12.241169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.241178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:32.901 [2024-11-29 10:35:12.241186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:29:32.901 [2024-11-29 10:35:12.241194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.241282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.241293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:32.901 [2024-11-29 10:35:12.241301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:29:32.901 [2024-11-29 10:35:12.241311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.241403] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:32.901 [2024-11-29 10:35:12.241421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:32.901 [2024-11-29 10:35:12.241429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:32.901 [2024-11-29 10:35:12.241457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:32.901 [2024-11-29 10:35:12.241475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:32.901 [2024-11-29 10:35:12.241483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:32.901 [2024-11-29 10:35:12.241492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:32.901 [2024-11-29 10:35:12.241508] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:32.901 [2024-11-29 10:35:12.241515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:32.901 [2024-11-29 10:35:12.241534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:32.901 [2024-11-29 10:35:12.241543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:32.901 [2024-11-29 10:35:12.241560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:32.901 [2024-11-29 10:35:12.241567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:32.901 [2024-11-29 10:35:12.241583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:32.901 [2024-11-29 10:35:12.241594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:32.901 [2024-11-29 10:35:12.241611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:32.901 [2024-11-29 10:35:12.241618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:32.901 [2024-11-29 10:35:12.241635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:32.901 [2024-11-29 10:35:12.241643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:32.901 [2024-11-29 10:35:12.241662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:32.901 [2024-11-29 10:35:12.241669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:32.901 [2024-11-29 10:35:12.241686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:32.901 [2024-11-29 10:35:12.241696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:32.901 [2024-11-29 10:35:12.241712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:32.901 [2024-11-29 10:35:12.241736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:32.901 [2024-11-29 10:35:12.241761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:32.901 [2024-11-29 10:35:12.241769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241777] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:32.901 [2024-11-29 10:35:12.241786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:32.901 [2024-11-29 10:35:12.241810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.901 [2024-11-29 10:35:12.241831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:32.901 [2024-11-29 10:35:12.241844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:32.901 [2024-11-29 10:35:12.241853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:32.901 [2024-11-29 10:35:12.241861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:32.901 [2024-11-29 10:35:12.241870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:32.901 [2024-11-29 10:35:12.241878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:32.901 [2024-11-29 10:35:12.241899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:32.901 [2024-11-29 10:35:12.241910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.241923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:32.901 [2024-11-29 10:35:12.241931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.241941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.241949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:32.901 [2024-11-29 10:35:12.241959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:32.901 [2024-11-29 10:35:12.241967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:32.901 [2024-11-29 10:35:12.241978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:32.901 [2024-11-29 10:35:12.241987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.241997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:32.901 [2024-11-29 10:35:12.242050] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:32.901 [2024-11-29 10:35:12.242059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:32.901 [2024-11-29 10:35:12.242077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:32.901 [2024-11-29 10:35:12.242087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:32.901 [2024-11-29 10:35:12.242095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:32.901 [2024-11-29 10:35:12.242105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.901 [2024-11-29 10:35:12.242113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:32.901 [2024-11-29 10:35:12.242125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.766 ms 00:29:32.901 [2024-11-29 10:35:12.242133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.901 [2024-11-29 10:35:12.242174] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
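For reference, the bring-up traced above (ftl/common.sh@54-119) condenses to the sketch below. This is an assumed reconstruction, not a verbatim copy of test/ftl/common.sh: the BDFs and sizes are the ones used in this run, and the size check mirrors get_bdev_size, which multiplies jq's .block_size by .num_blocks (4096 B x 1310720 blocks = 5120 MiB for basen1; 4096 B x 5242880 blocks = 20480 MiB for the thin lvol).

# Assumed reconstruction of the traced FTL bring-up; arguments are the ones
# visible in this run's xtrace, helper logic is condensed.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
# clear_lvols: drop any stale lvstore left on basen1 by a previous run
for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $RPC bdev_lvol_delete_lvstore -u "$lvs"
done
lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)                     # prints new lvstore UUID
lvol=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")           # thin 20480 MiB lvol UUID
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
$RPC bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0, the NV cache
$RPC -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2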
00:29:32.901 [2024-11-29 10:35:12.242194] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:37.096 [2024-11-29 10:35:16.137813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.096 [2024-11-29 10:35:16.137866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:37.096 [2024-11-29 10:35:16.137890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3895.612 ms 00:29:37.096 [2024-11-29 10:35:16.137901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.096 [2024-11-29 10:35:16.146117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.096 [2024-11-29 10:35:16.146156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:37.096 [2024-11-29 10:35:16.146175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.138 ms 00:29:37.096 [2024-11-29 10:35:16.146184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.096 [2024-11-29 10:35:16.146230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.096 [2024-11-29 10:35:16.146239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:37.096 [2024-11-29 10:35:16.146249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:37.096 [2024-11-29 10:35:16.146256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.096 [2024-11-29 10:35:16.154898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.096 [2024-11-29 10:35:16.154930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:37.096 [2024-11-29 10:35:16.154942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.608 ms 00:29:37.096 [2024-11-29 10:35:16.154952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.096 [2024-11-29 10:35:16.154979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.096 [2024-11-29 10:35:16.154987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:37.096 [2024-11-29 10:35:16.154996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:37.096 [2024-11-29 10:35:16.155004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.096 [2024-11-29 10:35:16.155340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.096 [2024-11-29 10:35:16.155363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:37.096 [2024-11-29 10:35:16.155375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:29:37.096 [2024-11-29 10:35:16.155382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.155425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.155438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:37.097 [2024-11-29 10:35:16.155448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:37.097 [2024-11-29 10:35:16.155455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.160983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.161013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:37.097 [2024-11-29 10:35:16.161024] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.507 ms 00:29:37.097 [2024-11-29 10:35:16.161031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.182781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:37.097 [2024-11-29 10:35:16.183972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.184026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:37.097 [2024-11-29 10:35:16.184045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.881 ms 00:29:37.097 [2024-11-29 10:35:16.184061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.199330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.199371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:37.097 [2024-11-29 10:35:16.199381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.213 ms 00:29:37.097 [2024-11-29 10:35:16.199393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.199469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.199481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:37.097 [2024-11-29 10:35:16.199490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:37.097 [2024-11-29 10:35:16.199499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.202662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.202697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:37.097 [2024-11-29 10:35:16.202709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.145 ms 00:29:37.097 [2024-11-29 10:35:16.202718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.206009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.206041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:37.097 [2024-11-29 10:35:16.206050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.260 ms 00:29:37.097 [2024-11-29 10:35:16.206059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.206347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.206358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:37.097 [2024-11-29 10:35:16.206367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:29:37.097 [2024-11-29 10:35:16.206377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.235897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.235936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:37.097 [2024-11-29 10:35:16.235953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.502 ms 00:29:37.097 [2024-11-29 10:35:16.235962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.240344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:37.097 [2024-11-29 10:35:16.240378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:37.097 [2024-11-29 10:35:16.240388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.335 ms 00:29:37.097 [2024-11-29 10:35:16.240398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.244155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.244187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:37.097 [2024-11-29 10:35:16.244196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.726 ms 00:29:37.097 [2024-11-29 10:35:16.244205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.248224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.248257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:37.097 [2024-11-29 10:35:16.248266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.988 ms 00:29:37.097 [2024-11-29 10:35:16.248276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.248311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.248321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:37.097 [2024-11-29 10:35:16.248329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:37.097 [2024-11-29 10:35:16.248338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.248398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.097 [2024-11-29 10:35:16.248408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:37.097 [2024-11-29 10:35:16.248416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:37.097 [2024-11-29 10:35:16.248426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.097 [2024-11-29 10:35:16.249319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4017.867 ms, result 0 00:29:37.097 { 00:29:37.097 "name": "ftl", 00:29:37.097 "uuid": "1c8053d1-ca8b-4651-9e70-c19e5eabfd4e" 00:29:37.097 } 00:29:37.097 10:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:37.097 [2024-11-29 10:35:16.458423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.097 10:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:37.357 10:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:37.617 [2024-11-29 10:35:16.846825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:37.617 10:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:37.617 [2024-11-29 10:35:17.039105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:37.617 10:35:17 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:38.188 Fill FTL, iteration 1 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:38.188 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94033 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94033 /var/tmp/spdk.tgt.sock 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94033 ']' 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.189 10:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.189 [2024-11-29 10:35:17.446015] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
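The NVMe/TCP export traced just above (ftl/common.sh@121-126) condenses to the sketch below; an assumed reconstruction with the exact flags from this run. The save_config destination is not visible in the xtrace.

# Expose the ftl bdev over NVMe/TCP so a separate initiator-side process can
# drive I/O against it; arguments copied from the traced rpc.py calls.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport --trtype TCP
$RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # any host, at most 1 namespace
$RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # namespace backed by the ftl bdev
$RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$RPC save_config                                                # output redirection elided in the trace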
00:29:38.189 [2024-11-29 10:35:17.446132] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94033 ] 00:29:38.189 [2024-11-29 10:35:17.590429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.189 [2024-11-29 10:35:17.610356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.132 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.132 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:39.132 10:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:39.132 ftln1 00:29:39.132 10:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:39.133 10:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94033 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94033 ']' 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94033 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94033 00:29:39.391 killing process with pid 94033 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:39.391 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:39.392 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94033' 00:29:39.392 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94033 00:29:39.392 10:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94033 00:29:39.964 10:35:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:39.964 10:35:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:39.964 [2024-11-29 10:35:19.191250] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
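tcp_initiator_setup, whose xtrace just scrolled by (ftl/common.sh@151-177), does roughly the following; waitforlisten and killprocess are the autotest_common.sh helpers named in the trace, and the ini.json destination is inferred from the -f check at common.sh@153.

# Sketch of tcp_initiator_setup: boot a throwaway target on core 1, attach the
# exported namespace once (-> ftln1), dump the resulting bdev subsystem config
# to ini.json for spdk_dd to replay, then tear the target down again.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
"$SPDK/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
$RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2018-09.io.spdk:cnode0                               # prints: ftln1
{
    echo '{"subsystems": ['
    $RPC save_subsystem_config -n bdev
    echo ']}'
} > "$SPDK/test/ftl/config/ini.json"
killprocess "$spdk_ini_pid"   # simplified; the helper also waits on the pid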
00:29:39.964 [2024-11-29 10:35:19.191366] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94059 ] 00:29:39.964 [2024-11-29 10:35:19.335559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.964 [2024-11-29 10:35:19.361707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.351  [2024-11-29T10:35:21.757Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-29T10:35:22.698Z] Copying: 373/1024 [MB] (182 MBps) [2024-11-29T10:35:23.643Z] Copying: 562/1024 [MB] (189 MBps) [2024-11-29T10:35:24.586Z] Copying: 754/1024 [MB] (192 MBps) [2024-11-29T10:35:25.153Z] Copying: 922/1024 [MB] (168 MBps) [2024-11-29T10:35:25.153Z] Copying: 1024/1024 [MB] (average 188 MBps) 00:29:45.688 00:29:45.946 Calculate MD5 checksum, iteration 1 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:45.946 10:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:45.946 [2024-11-29 10:35:25.221583] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
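The fill/verify traffic in the traces that follow is driven by the loop at upgrade_shutdown.sh@38-48. Reconstructed from the traced variables (seek, skip, sums[i]) it looks roughly like this, where tcp_dd is the common.sh@198-199 wrapper around spdk_dd seen above.

# Two iterations: write 1024 x 1 MiB of urandom at a moving offset, read the
# same window back through ftln1, and record its md5 for later comparison.
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
iterations=2
seek=0 skip=0
sums=()
for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    seek=$((seek + 1024))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sums[i]=$(md5sum "$file" | cut -f1 '-d ')   # e.g. fc1653ef... in iteration 1
done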
00:29:45.946 [2024-11-29 10:35:25.221688] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94123 ] 00:29:45.946 [2024-11-29 10:35:25.361671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.946 [2024-11-29 10:35:25.379104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.320  [2024-11-29T10:35:27.042Z] Copying: 699/1024 [MB] (699 MBps) [2024-11-29T10:35:27.303Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:29:47.838 00:29:47.838 10:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:47.838 10:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:50.423 Fill FTL, iteration 2 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fc1653ef2188a221f7ecf03037594574 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:50.423 10:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:50.423 [2024-11-29 10:35:29.447071] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:29:50.423 [2024-11-29 10:35:29.447178] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94178 ] 00:29:50.423 [2024-11-29 10:35:29.584737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.423 [2024-11-29 10:35:29.608077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.358  [2024-11-29T10:35:32.210Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-29T10:35:33.151Z] Copying: 426/1024 [MB] (186 MBps) [2024-11-29T10:35:34.088Z] Copying: 625/1024 [MB] (199 MBps) [2024-11-29T10:35:35.025Z] Copying: 826/1024 [MB] (201 MBps) [2024-11-29T10:35:35.025Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:29:55.560 00:29:55.560 Calculate MD5 checksum, iteration 2 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:55.560 10:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:55.560 [2024-11-29 10:35:34.948921] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:29:55.560 [2024-11-29 10:35:34.949036] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94232 ] 00:29:55.823 [2024-11-29 10:35:35.094024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.823 [2024-11-29 10:35:35.129268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.204  [2024-11-29T10:35:37.237Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-29T10:35:37.808Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:29:58.343 00:29:58.343 10:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:58.343 10:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:00.913 10:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:00.913 10:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=697dd705f4ae9c8c9397e11e6690096b 00:30:00.913 10:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:00.913 10:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:00.913 10:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:00.913 [2024-11-29 10:35:40.126609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.913 [2024-11-29 10:35:40.126683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:00.913 [2024-11-29 10:35:40.126705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:00.913 [2024-11-29 10:35:40.126727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.913 [2024-11-29 10:35:40.126768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.913 [2024-11-29 10:35:40.126782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:00.913 [2024-11-29 10:35:40.126797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:00.913 [2024-11-29 10:35:40.126828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.913 [2024-11-29 10:35:40.126861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.913 [2024-11-29 10:35:40.126887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:00.913 [2024-11-29 10:35:40.126900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:00.913 [2024-11-29 10:35:40.126918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.913 [2024-11-29 10:35:40.127019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.379 ms, result 0 00:30:00.913 true 00:30:00.913 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:00.913 { 00:30:00.913 "name": "ftl", 00:30:00.913 "properties": [ 00:30:00.913 { 00:30:00.913 "name": "superblock_version", 00:30:00.913 "value": 5, 00:30:00.913 "read-only": true 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "name": "base_device", 00:30:00.913 "bands": [ 00:30:00.913 { 00:30:00.913 "id": 0, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 
00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 1, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 2, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 3, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 4, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 5, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 6, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 7, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 8, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 9, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 10, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 11, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 12, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 13, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 14, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 15, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 16, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 17, 00:30:00.913 "state": "FREE", 00:30:00.913 "validity": 0.0 00:30:00.913 } 00:30:00.913 ], 00:30:00.913 "read-only": true 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "name": "cache_device", 00:30:00.913 "type": "bdev", 00:30:00.913 "chunks": [ 00:30:00.913 { 00:30:00.913 "id": 0, 00:30:00.913 "state": "INACTIVE", 00:30:00.913 "utilization": 0.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 1, 00:30:00.913 "state": "CLOSED", 00:30:00.913 "utilization": 1.0 00:30:00.913 }, 00:30:00.913 { 00:30:00.913 "id": 2, 00:30:00.913 "state": "CLOSED", 00:30:00.914 "utilization": 1.0 00:30:00.914 }, 00:30:00.914 { 00:30:00.914 "id": 3, 00:30:00.914 "state": "OPEN", 00:30:00.914 "utilization": 0.001953125 00:30:00.914 }, 00:30:00.914 { 00:30:00.914 "id": 4, 00:30:00.914 "state": "OPEN", 00:30:00.914 "utilization": 0.0 00:30:00.914 } 00:30:00.914 ], 00:30:00.914 "read-only": true 00:30:00.914 }, 00:30:00.914 { 00:30:00.914 "name": "verbose_mode", 00:30:00.914 "value": true, 00:30:00.914 "unit": "", 00:30:00.914 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:00.914 }, 00:30:00.914 { 00:30:00.914 "name": "prep_upgrade_on_shutdown", 00:30:00.914 "value": false, 00:30:00.914 "unit": "", 00:30:00.914 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:00.914 } 00:30:00.914 ] 00:30:00.914 } 00:30:00.914 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:01.174 [2024-11-29 10:35:40.555117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:01.174 [2024-11-29 10:35:40.555199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:01.174 [2024-11-29 10:35:40.555220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:01.174 [2024-11-29 10:35:40.555234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.174 [2024-11-29 10:35:40.555272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.174 [2024-11-29 10:35:40.555286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:01.174 [2024-11-29 10:35:40.555297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:01.174 [2024-11-29 10:35:40.555309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.174 [2024-11-29 10:35:40.555338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.174 [2024-11-29 10:35:40.555350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:01.174 [2024-11-29 10:35:40.555364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:01.174 [2024-11-29 10:35:40.555376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.174 [2024-11-29 10:35:40.555484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.327 ms, result 0 00:30:01.174 true 00:30:01.174 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:01.174 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.174 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:01.483 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:01.483 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:01.483 10:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:01.751 [2024-11-29 10:35:41.067740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.751 [2024-11-29 10:35:41.067822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:01.751 [2024-11-29 10:35:41.067839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:01.752 [2024-11-29 10:35:41.067848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.752 [2024-11-29 10:35:41.067876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.752 [2024-11-29 10:35:41.067885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:01.752 [2024-11-29 10:35:41.067894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:01.752 [2024-11-29 10:35:41.067902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.752 [2024-11-29 10:35:41.067921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.752 [2024-11-29 10:35:41.067930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:01.752 [2024-11-29 10:35:41.067937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:01.752 [2024-11-29 10:35:41.067945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:01.752 [2024-11-29 10:35:41.068010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.262 ms, result 0 00:30:01.752 true 00:30:01.752 10:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:02.012 { 00:30:02.012 "name": "ftl", 00:30:02.012 "properties": [ 00:30:02.012 { 00:30:02.012 "name": "superblock_version", 00:30:02.012 "value": 5, 00:30:02.012 "read-only": true 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "name": "base_device", 00:30:02.012 "bands": [ 00:30:02.012 { 00:30:02.012 "id": 0, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 1, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 2, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 3, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 4, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 5, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 6, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 7, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 8, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 9, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 10, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 11, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 12, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 13, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 14, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 15, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 16, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 17, 00:30:02.012 "state": "FREE", 00:30:02.012 "validity": 0.0 00:30:02.012 } 00:30:02.012 ], 00:30:02.012 "read-only": true 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "name": "cache_device", 00:30:02.012 "type": "bdev", 00:30:02.012 "chunks": [ 00:30:02.012 { 00:30:02.012 "id": 0, 00:30:02.012 "state": "INACTIVE", 00:30:02.012 "utilization": 0.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 1, 00:30:02.012 "state": "CLOSED", 00:30:02.012 "utilization": 1.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 2, 00:30:02.012 "state": "CLOSED", 00:30:02.012 "utilization": 1.0 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "id": 3, 00:30:02.012 "state": "OPEN", 00:30:02.012 "utilization": 0.001953125 00:30:02.012 }, 00:30:02.013 { 00:30:02.013 "id": 4, 00:30:02.013 "state": "OPEN", 00:30:02.013 "utilization": 0.0 00:30:02.013 } 00:30:02.013 ], 00:30:02.013 "read-only": true 00:30:02.013 }, 00:30:02.013 { 00:30:02.013 "name": "verbose_mode", 
00:30:02.013 "value": true, 00:30:02.013 "unit": "", 00:30:02.013 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:02.013 }, 00:30:02.013 { 00:30:02.013 "name": "prep_upgrade_on_shutdown", 00:30:02.013 "value": true, 00:30:02.013 "unit": "", 00:30:02.013 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:02.013 } 00:30:02.013 ] 00:30:02.013 } 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93909 ]] 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93909 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93909 ']' 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93909 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93909 00:30:02.013 killing process with pid 93909 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93909' 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93909 00:30:02.013 10:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93909 00:30:02.013 [2024-11-29 10:35:41.461115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:02.013 [2024-11-29 10:35:41.465236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.013 [2024-11-29 10:35:41.465282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:02.013 [2024-11-29 10:35:41.465295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:02.013 [2024-11-29 10:35:41.465304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.013 [2024-11-29 10:35:41.465327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:02.013 [2024-11-29 10:35:41.465849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.013 [2024-11-29 10:35:41.465898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:02.013 [2024-11-29 10:35:41.465908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:30:02.013 [2024-11-29 10:35:41.465916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.697266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.697329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:12.019 [2024-11-29 10:35:50.697342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9231.297 ms 00:30:12.019 [2024-11-29 10:35:50.697350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.698337] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.698356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:12.019 [2024-11-29 10:35:50.698364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.975 ms 00:30:12.019 [2024-11-29 10:35:50.698370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.699258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.699278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:12.019 [2024-11-29 10:35:50.699290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:30:12.019 [2024-11-29 10:35:50.699297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.700660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.700689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:12.019 [2024-11-29 10:35:50.700698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.334 ms 00:30:12.019 [2024-11-29 10:35:50.700704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.702807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.702838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:12.019 [2024-11-29 10:35:50.702846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.071 ms 00:30:12.019 [2024-11-29 10:35:50.702852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.702910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.702917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:12.019 [2024-11-29 10:35:50.702924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:12.019 [2024-11-29 10:35:50.702930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.703724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.703751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:12.019 [2024-11-29 10:35:50.703759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.781 ms 00:30:12.019 [2024-11-29 10:35:50.703764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.704705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.704732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:12.019 [2024-11-29 10:35:50.704739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.918 ms 00:30:12.019 [2024-11-29 10:35:50.704745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.705558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.705585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:12.019 [2024-11-29 10:35:50.705592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:30:12.019 [2024-11-29 10:35:50.705598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.706466] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.706492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:12.019 [2024-11-29 10:35:50.706499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:30:12.019 [2024-11-29 10:35:50.706505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.019 [2024-11-29 10:35:50.706526] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:12.019 [2024-11-29 10:35:50.706537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:12.019 [2024-11-29 10:35:50.706545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:12.019 [2024-11-29 10:35:50.706552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:12.019 [2024-11-29 10:35:50.706558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:12.019 [2024-11-29 10:35:50.706645] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:12.019 [2024-11-29 10:35:50.706651] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1c8053d1-ca8b-4651-9e70-c19e5eabfd4e 00:30:12.019 [2024-11-29 10:35:50.706657] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:12.019 [2024-11-29 10:35:50.706662] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:12.019 [2024-11-29 10:35:50.706667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:12.019 [2024-11-29 10:35:50.706677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:12.019 [2024-11-29 10:35:50.706683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:12.019 [2024-11-29 10:35:50.706693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:12.019 [2024-11-29 10:35:50.706699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:12.019 [2024-11-29 10:35:50.706704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:12.019 [2024-11-29 10:35:50.706709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:12.019 [2024-11-29 10:35:50.706715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.019 [2024-11-29 10:35:50.706721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:12.019 [2024-11-29 10:35:50.706727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:30:12.020 [2024-11-29 10:35:50.706733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.707975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.020 [2024-11-29 10:35:50.708001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:12.020 [2024-11-29 10:35:50.708008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.230 ms 00:30:12.020 [2024-11-29 10:35:50.708018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.708082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.020 [2024-11-29 10:35:50.708088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:12.020 [2024-11-29 10:35:50.708094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:30:12.020 [2024-11-29 10:35:50.708100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.712475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.712508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:12.020 [2024-11-29 10:35:50.712519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.712526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.712547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.712553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:12.020 [2024-11-29 10:35:50.712559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.712565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.712607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.712617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:12.020 [2024-11-29 10:35:50.712623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.712629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.712641] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.712648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:12.020 [2024-11-29 10:35:50.712654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.712662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.720562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.720599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:12.020 [2024-11-29 10:35:50.720607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.720613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.726983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:12.020 [2024-11-29 10:35:50.727025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.727080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:12.020 [2024-11-29 10:35:50.727098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.727128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:12.020 [2024-11-29 10:35:50.727142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.727197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:12.020 [2024-11-29 10:35:50.727210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.727241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:12.020 [2024-11-29 10:35:50.727254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.727294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:12.020 [2024-11-29 10:35:50.727307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 
[2024-11-29 10:35:50.727349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.020 [2024-11-29 10:35:50.727361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:12.020 [2024-11-29 10:35:50.727369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.020 [2024-11-29 10:35:50.727374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.020 [2024-11-29 10:35:50.727470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9262.189 ms, result 0 00:30:15.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94421 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94421 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94421 ']' 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.411 10:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:15.671 [2024-11-29 10:35:54.932941] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
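With the FTL shutdown complete ('FTL shutdown', 9262.189 ms above), the test relaunches the target from the configuration saved at shutdown and blocks until the RPC socket answers. A condensed sketch of that restart, assembled from the commands traced just below (binary path, cpumask, and config path exactly as logged; waitforlisten is the autotest helper that polls the socket, invoked here for pid 94421):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until the restarted target responds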
00:30:15.671 [2024-11-29 10:35:54.933071] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94421 ] 00:30:15.671 [2024-11-29 10:35:55.079244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.671 [2024-11-29 10:35:55.100670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.239 [2024-11-29 10:35:55.435554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:16.240 [2024-11-29 10:35:55.435643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:16.240 [2024-11-29 10:35:55.588692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.588750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:16.240 [2024-11-29 10:35:55.588768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.240 [2024-11-29 10:35:55.588777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.588864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.588878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:16.240 [2024-11-29 10:35:55.588888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:30:16.240 [2024-11-29 10:35:55.588895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.588919] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:16.240 [2024-11-29 10:35:55.589191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:16.240 [2024-11-29 10:35:55.589214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.589222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:16.240 [2024-11-29 10:35:55.589232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:30:16.240 [2024-11-29 10:35:55.589240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.591049] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:16.240 [2024-11-29 10:35:55.594957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.595013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:16.240 [2024-11-29 10:35:55.595025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.911 ms 00:30:16.240 [2024-11-29 10:35:55.595034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.595105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.595115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:16.240 [2024-11-29 10:35:55.595130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:16.240 [2024-11-29 10:35:55.595137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.603166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 
10:35:55.603210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:16.240 [2024-11-29 10:35:55.603221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.981 ms 00:30:16.240 [2024-11-29 10:35:55.603229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.603282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.603292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:16.240 [2024-11-29 10:35:55.603300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:16.240 [2024-11-29 10:35:55.603308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.603371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.603389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:16.240 [2024-11-29 10:35:55.603399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:16.240 [2024-11-29 10:35:55.603409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.603434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:16.240 [2024-11-29 10:35:55.605460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.605499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:16.240 [2024-11-29 10:35:55.605510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.031 ms 00:30:16.240 [2024-11-29 10:35:55.605518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.605553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.605567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:16.240 [2024-11-29 10:35:55.605575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:16.240 [2024-11-29 10:35:55.605583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.605608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:16.240 [2024-11-29 10:35:55.605633] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:16.240 [2024-11-29 10:35:55.605671] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:16.240 [2024-11-29 10:35:55.605693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:16.240 [2024-11-29 10:35:55.605820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:16.240 [2024-11-29 10:35:55.605831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:16.240 [2024-11-29 10:35:55.605843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:16.240 [2024-11-29 10:35:55.605854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:16.240 [2024-11-29 10:35:55.605863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:16.240 [2024-11-29 10:35:55.605882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:16.240 [2024-11-29 10:35:55.605891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:16.240 [2024-11-29 10:35:55.605899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:16.240 [2024-11-29 10:35:55.605907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:16.240 [2024-11-29 10:35:55.605915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.605925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:16.240 [2024-11-29 10:35:55.605933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:30:16.240 [2024-11-29 10:35:55.605940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.606026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.240 [2024-11-29 10:35:55.606039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:16.240 [2024-11-29 10:35:55.606047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:16.240 [2024-11-29 10:35:55.606056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.240 [2024-11-29 10:35:55.606160] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:16.240 [2024-11-29 10:35:55.606171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:16.240 [2024-11-29 10:35:55.606183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:16.240 [2024-11-29 10:35:55.606208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:16.240 [2024-11-29 10:35:55.606226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:16.240 [2024-11-29 10:35:55.606234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:16.240 [2024-11-29 10:35:55.606242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:16.240 [2024-11-29 10:35:55.606258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:16.240 [2024-11-29 10:35:55.606267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:16.240 [2024-11-29 10:35:55.606283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:16.240 [2024-11-29 10:35:55.606296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:16.240 [2024-11-29 10:35:55.606312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:16.240 [2024-11-29 10:35:55.606319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606327] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:16.240 [2024-11-29 10:35:55.606335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:16.240 [2024-11-29 10:35:55.606343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:16.240 [2024-11-29 10:35:55.606359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:16.240 [2024-11-29 10:35:55.606366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:16.240 [2024-11-29 10:35:55.606381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:16.240 [2024-11-29 10:35:55.606389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:16.240 [2024-11-29 10:35:55.606404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:16.240 [2024-11-29 10:35:55.606411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:16.240 [2024-11-29 10:35:55.606430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:16.240 [2024-11-29 10:35:55.606437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:16.240 [2024-11-29 10:35:55.606452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:16.240 [2024-11-29 10:35:55.606474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:16.240 [2024-11-29 10:35:55.606497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:16.240 [2024-11-29 10:35:55.606504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606512] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:16.240 [2024-11-29 10:35:55.606523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:16.240 [2024-11-29 10:35:55.606532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.240 [2024-11-29 10:35:55.606554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:16.240 [2024-11-29 10:35:55.606562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:16.240 [2024-11-29 10:35:55.606569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:16.240 [2024-11-29 10:35:55.606579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:16.240 [2024-11-29 10:35:55.606587] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:16.240 [2024-11-29 10:35:55.606595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:16.240 [2024-11-29 10:35:55.606604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:16.240 [2024-11-29 10:35:55.606615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.240 [2024-11-29 10:35:55.606624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:16.240 [2024-11-29 10:35:55.606633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:16.241 [2024-11-29 10:35:55.606658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:16.241 [2024-11-29 10:35:55.606667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:16.241 [2024-11-29 10:35:55.606676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:16.241 [2024-11-29 10:35:55.606685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:16.241 [2024-11-29 10:35:55.606745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:16.241 [2024-11-29 10:35:55.606755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.241 [2024-11-29 10:35:55.606773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:16.241 [2024-11-29 10:35:55.606781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:16.241 [2024-11-29 10:35:55.606795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:16.241 [2024-11-29 10:35:55.606818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.241 [2024-11-29 10:35:55.606836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:16.241 [2024-11-29 10:35:55.606846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.727 ms 00:30:16.241 [2024-11-29 10:35:55.606855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.241 [2024-11-29 10:35:55.606903] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:16.241 [2024-11-29 10:35:55.606957] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:19.542 [2024-11-29 10:35:58.980580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.542 [2024-11-29 10:35:58.980668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:19.542 [2024-11-29 10:35:58.980685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3373.668 ms 00:30:19.542 [2024-11-29 10:35:58.980704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.542 [2024-11-29 10:35:58.994243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.542 [2024-11-29 10:35:58.994297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:19.542 [2024-11-29 10:35:58.994321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.402 ms 00:30:19.542 [2024-11-29 10:35:58.994330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.542 [2024-11-29 10:35:58.994431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.542 [2024-11-29 10:35:58.994443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:19.542 [2024-11-29 10:35:58.994457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:19.542 [2024-11-29 10:35:58.994468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.007523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.007578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:19.805 [2024-11-29 10:35:59.007592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.008 ms 00:30:19.805 [2024-11-29 10:35:59.007600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.007636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.007645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:19.805 [2024-11-29 10:35:59.007665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:19.805 [2024-11-29 10:35:59.007673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.008229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.008265] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:19.805 [2024-11-29 10:35:59.008278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.497 ms 00:30:19.805 [2024-11-29 10:35:59.008288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.008340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.008351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:19.805 [2024-11-29 10:35:59.008361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:19.805 [2024-11-29 10:35:59.008373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.016703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.016749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:19.805 [2024-11-29 10:35:59.016769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.302 ms 00:30:19.805 [2024-11-29 10:35:59.016781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.032902] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:19.805 [2024-11-29 10:35:59.033082] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:19.805 [2024-11-29 10:35:59.033118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.033129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:19.805 [2024-11-29 10:35:59.033141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.227 ms 00:30:19.805 [2024-11-29 10:35:59.033150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.039994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.040065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:19.805 [2024-11-29 10:35:59.040083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.367 ms 00:30:19.805 [2024-11-29 10:35:59.040095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.042961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.043018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:19.805 [2024-11-29 10:35:59.043032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.792 ms 00:30:19.805 [2024-11-29 10:35:59.043043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.805 [2024-11-29 10:35:59.045767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.805 [2024-11-29 10:35:59.045837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:19.805 [2024-11-29 10:35:59.045853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.664 ms 00:30:19.805 [2024-11-29 10:35:59.045865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.046409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.046443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:19.806 [2024-11-29 
10:35:59.046459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:30:19.806 [2024-11-29 10:35:59.046472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.069088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.069158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:19.806 [2024-11-29 10:35:59.069172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.582 ms 00:30:19.806 [2024-11-29 10:35:59.069181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.077960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:19.806 [2024-11-29 10:35:59.078969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.079008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:19.806 [2024-11-29 10:35:59.079019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.725 ms 00:30:19.806 [2024-11-29 10:35:59.079028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.079122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.079138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:19.806 [2024-11-29 10:35:59.079148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:19.806 [2024-11-29 10:35:59.079156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.079206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.079216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:19.806 [2024-11-29 10:35:59.079229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:19.806 [2024-11-29 10:35:59.079238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.079262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.079272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:19.806 [2024-11-29 10:35:59.079280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:19.806 [2024-11-29 10:35:59.079288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.079335] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:19.806 [2024-11-29 10:35:59.079347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.079355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:19.806 [2024-11-29 10:35:59.079366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:19.806 [2024-11-29 10:35:59.079378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.084855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.084902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:19.806 [2024-11-29 10:35:59.084922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.455 ms 00:30:19.806 [2024-11-29 10:35:59.084931] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.085016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.806 [2024-11-29 10:35:59.085027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:19.806 [2024-11-29 10:35:59.085036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:19.806 [2024-11-29 10:35:59.085048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.806 [2024-11-29 10:35:59.086635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3497.475 ms, result 0 00:30:19.806 [2024-11-29 10:35:59.099411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.806 [2024-11-29 10:35:59.115439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:19.806 [2024-11-29 10:35:59.123536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:19.806 10:35:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.806 10:35:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:19.806 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:19.806 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:19.806 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:20.068 [2024-11-29 10:35:59.407690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.068 [2024-11-29 10:35:59.407755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:20.068 [2024-11-29 10:35:59.407772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:20.068 [2024-11-29 10:35:59.407781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.068 [2024-11-29 10:35:59.407822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.068 [2024-11-29 10:35:59.407833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:20.068 [2024-11-29 10:35:59.407846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:20.068 [2024-11-29 10:35:59.407855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.068 [2024-11-29 10:35:59.407877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.068 [2024-11-29 10:35:59.407886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:20.068 [2024-11-29 10:35:59.407895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:20.068 [2024-11-29 10:35:59.407903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.068 [2024-11-29 10:35:59.407966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.270 ms, result 0 00:30:20.068 true 00:30:20.068 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.330 { 00:30:20.330 "name": "ftl", 00:30:20.330 "properties": [ 00:30:20.330 { 00:30:20.330 "name": "superblock_version", 00:30:20.330 "value": 5, 00:30:20.330 "read-only": true 00:30:20.330 }, 00:30:20.330 { 
00:30:20.330 "name": "base_device", 00:30:20.330 "bands": [ 00:30:20.330 { 00:30:20.330 "id": 0, 00:30:20.330 "state": "CLOSED", 00:30:20.330 "validity": 1.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 1, 00:30:20.330 "state": "CLOSED", 00:30:20.330 "validity": 1.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 2, 00:30:20.330 "state": "CLOSED", 00:30:20.330 "validity": 0.007843137254901933 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 3, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 4, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 5, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 6, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 7, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 8, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 9, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 10, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 11, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 12, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 13, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 14, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 15, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 16, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 17, 00:30:20.330 "state": "FREE", 00:30:20.330 "validity": 0.0 00:30:20.330 } 00:30:20.330 ], 00:30:20.330 "read-only": true 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "name": "cache_device", 00:30:20.330 "type": "bdev", 00:30:20.330 "chunks": [ 00:30:20.330 { 00:30:20.330 "id": 0, 00:30:20.330 "state": "INACTIVE", 00:30:20.330 "utilization": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.330 "id": 1, 00:30:20.330 "state": "OPEN", 00:30:20.330 "utilization": 0.0 00:30:20.330 }, 00:30:20.330 { 00:30:20.331 "id": 2, 00:30:20.331 "state": "OPEN", 00:30:20.331 "utilization": 0.0 00:30:20.331 }, 00:30:20.331 { 00:30:20.331 "id": 3, 00:30:20.331 "state": "FREE", 00:30:20.331 "utilization": 0.0 00:30:20.331 }, 00:30:20.331 { 00:30:20.331 "id": 4, 00:30:20.331 "state": "FREE", 00:30:20.331 "utilization": 0.0 00:30:20.331 } 00:30:20.331 ], 00:30:20.331 "read-only": true 00:30:20.331 }, 00:30:20.331 { 00:30:20.331 "name": "verbose_mode", 00:30:20.331 "value": true, 00:30:20.331 "unit": "", 00:30:20.331 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:20.331 }, 00:30:20.331 { 00:30:20.331 "name": "prep_upgrade_on_shutdown", 00:30:20.331 "value": false, 00:30:20.331 "unit": "", 00:30:20.331 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:20.331 } 00:30:20.331 ] 00:30:20.331 } 00:30:20.331 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:20.331 10:35:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:20.331 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.592 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:20.592 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:20.592 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:20.592 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.592 10:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:20.853 Validate MD5 checksum, iteration 1 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:20.853 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:20.854 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:20.854 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:20.854 10:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:20.854 [2024-11-29 10:36:00.138382] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
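The xtrace above shows upgrade_shutdown.sh deriving two counters from the bdev_ftl_get_properties JSON before it starts validating checksums: the number of cache chunks with non-zero utilization and the number of bands still reported as OPENED. A minimal sketch of that check, assuming bash; the rpc path and the jq filters (and the used/opened names) are copied verbatim from the trace, while the scaffolding around them is not shown in the log and is an assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Chunks of the cache device that still hold data (utilization != 0).
  used=$("$rpc" bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')

  # Bands reported as OPENED (filter copied verbatim from the trace).
  opened=$("$rpc" bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')

  # In this run both come back 0, so neither branch fires.
  [[ $used -ne 0 ]] && echo "cache device still holds $used dirty chunk(s)"
  [[ $opened -ne 0 ]] && echo "$opened band(s) left OPENED"

Both counters being zero matches the property dump above: every cache chunk reports utilization 0.0 and no band is in an OPENED state.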
00:30:20.854 [2024-11-29 10:36:00.138522] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94491 ] 00:30:20.854 [2024-11-29 10:36:00.287794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.115 [2024-11-29 10:36:00.319266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.504  [2024-11-29T10:36:02.910Z] Copying: 529/1024 [MB] (529 MBps) [2024-11-29T10:36:06.215Z] Copying: 1024/1024 [MB] (average 530 MBps) 00:30:26.750 00:30:26.750 10:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:26.750 10:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:28.662 Validate MD5 checksum, iteration 2 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fc1653ef2188a221f7ecf03037594574 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fc1653ef2188a221f7ecf03037594574 != \f\c\1\6\5\3\e\f\2\1\8\8\a\2\2\1\f\7\e\c\f\0\3\0\3\7\5\9\4\5\7\4 ]] 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:28.662 10:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:28.662 [2024-11-29 10:36:07.922687] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
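Each "Validate MD5 checksum, iteration N" pass above follows the same pattern: tcp_dd pulls 1024 MiB out of the ftln1 initiator bdev over NVMe/TCP, md5sum hashes the copied file, and the script compares the digest against the value recorded for that region, advancing skip by 1024 blocks per pass. A sketch of one pass, assuming bash; the spdk_dd flags and paths are taken from the xtrace, while $expected stands in for the digest recorded for the region being checked:

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  : "${skip:=0}"   # starts at 0 (upgrade_shutdown.sh@96) and grows by 1024 per pass

  # Copy 1024 x 1 MiB blocks from ftln1, queue depth 2, starting at block $skip.
  "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  skip=$((skip + 1024))

  # Hash what was read back and compare with the digest captured for this region.
  sum=$(md5sum "$file" | cut -f1 -d' ')
  [[ $sum == "$expected" ]] || echo "MD5 mismatch at skip=$((skip - 1024))"

Iteration 1 matches above (fc1653ef2188a221f7ecf03037594574); iteration 2, already started in the trace, checks the next 1024 MiB region the same way.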
00:30:28.662 [2024-11-29 10:36:07.922946] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94569 ] 00:30:28.662 [2024-11-29 10:36:08.065883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.662 [2024-11-29 10:36:08.083979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.046  [2024-11-29T10:36:10.083Z] Copying: 672/1024 [MB] (672 MBps) [2024-11-29T10:36:10.655Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:30:31.190 00:30:31.190 10:36:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:31.190 10:36:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:33.733 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:33.733 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=697dd705f4ae9c8c9397e11e6690096b 00:30:33.733 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 697dd705f4ae9c8c9397e11e6690096b != \6\9\7\d\d\7\0\5\f\4\a\e\9\c\8\c\9\3\9\7\e\1\1\e\6\6\9\0\0\9\6\b ]] 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94421 ]] 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94421 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94625 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94625 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94625 ']' 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
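The block above is the dirty-shutdown half of the test: tcp_target_shutdown_dirty SIGKILLs the running target (pid 94421) so FTL gets no chance to run its clean shutdown path, and tcp_target_setup immediately starts a fresh spdk_tgt (pid 94625) from the saved tgt.json; the long recovery trace that follows is that new instance replaying band state, P2L checkpoints, and the open NV-cache chunks. A sketch of those two steps, assuming bash; the binaries, flags, and the waitforlisten helper match the xtrace, and error handling is omitted:

  tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

  # Dirty shutdown: SIGKILL, so no FTL clean-state persistence happens.
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid

  # Restart from the saved config; FTL must recover from shared memory and NV cache.
  "$tgt_bin" '--cpumask=[0]' --config="$tgt_cnfg" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # autotest_common.sh helper: waits on /var/tmp/spdk.sock

Consistent with that, the recovery below reports "SHM: clean 0, shm_clean 0" and then replays two open chunks (seq ids 14 and 15), which is exactly what a kill -9 with in-flight data should leave behind.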
00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.734 10:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:33.734 [2024-11-29 10:36:12.863756] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:30:33.734 [2024-11-29 10:36:12.863883] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94625 ] 00:30:33.734 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94421 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:33.734 [2024-11-29 10:36:13.006433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.734 [2024-11-29 10:36:13.025509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.995 [2024-11-29 10:36:13.273619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:33.995 [2024-11-29 10:36:13.273676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:33.995 [2024-11-29 10:36:13.411150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.411186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:33.995 [2024-11-29 10:36:13.411198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.995 [2024-11-29 10:36:13.411204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.411240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.411249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:33.995 [2024-11-29 10:36:13.411255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:33.995 [2024-11-29 10:36:13.411261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.411274] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:33.995 [2024-11-29 10:36:13.411668] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:33.995 [2024-11-29 10:36:13.411701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.411711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:33.995 [2024-11-29 10:36:13.411719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:30:33.995 [2024-11-29 10:36:13.411724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.411956] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:33.995 [2024-11-29 10:36:13.415057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.415089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:33.995 [2024-11-29 10:36:13.415097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.102 ms 00:30:33.995 [2024-11-29 10:36:13.415103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.415835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:33.995 [2024-11-29 10:36:13.415863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:33.995 [2024-11-29 10:36:13.415871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:33.995 [2024-11-29 10:36:13.415877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.416085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.416104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:33.995 [2024-11-29 10:36:13.416111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:30:33.995 [2024-11-29 10:36:13.416117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.416143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.416149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:33.995 [2024-11-29 10:36:13.416155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:33.995 [2024-11-29 10:36:13.416160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.416179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.416188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:33.995 [2024-11-29 10:36:13.416194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:33.995 [2024-11-29 10:36:13.416201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.416215] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:33.995 [2024-11-29 10:36:13.416873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.416895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:33.995 [2024-11-29 10:36:13.416902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.661 ms 00:30:33.995 [2024-11-29 10:36:13.416907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.416928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.416936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:33.995 [2024-11-29 10:36:13.416942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.995 [2024-11-29 10:36:13.416947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.416962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:33.995 [2024-11-29 10:36:13.416977] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:33.995 [2024-11-29 10:36:13.417003] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:33.995 [2024-11-29 10:36:13.417013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:33.995 [2024-11-29 10:36:13.417094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:33.995 [2024-11-29 10:36:13.417102] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:33.995 [2024-11-29 10:36:13.417110] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:33.995 [2024-11-29 10:36:13.417117] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:33.995 [2024-11-29 10:36:13.417124] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:33.995 [2024-11-29 10:36:13.417130] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:33.995 [2024-11-29 10:36:13.417138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:33.995 [2024-11-29 10:36:13.417144] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:33.995 [2024-11-29 10:36:13.417151] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:33.995 [2024-11-29 10:36:13.417159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.417166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:33.995 [2024-11-29 10:36:13.417172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:30:33.995 [2024-11-29 10:36:13.417177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.417241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.995 [2024-11-29 10:36:13.417247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:33.995 [2024-11-29 10:36:13.417252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:33.995 [2024-11-29 10:36:13.417258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.995 [2024-11-29 10:36:13.417334] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:33.995 [2024-11-29 10:36:13.417340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:33.995 [2024-11-29 10:36:13.417346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:33.995 [2024-11-29 10:36:13.417354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.995 [2024-11-29 10:36:13.417359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:33.996 [2024-11-29 10:36:13.417364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:33.996 [2024-11-29 10:36:13.417374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:33.996 [2024-11-29 10:36:13.417379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:33.996 [2024-11-29 10:36:13.417384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:33.996 [2024-11-29 10:36:13.417394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:33.996 [2024-11-29 10:36:13.417399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:33.996 [2024-11-29 10:36:13.417409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:33.996 [2024-11-29 10:36:13.417419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:33.996 [2024-11-29 10:36:13.417430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:33.996 [2024-11-29 10:36:13.417434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:33.996 [2024-11-29 10:36:13.417445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:33.996 [2024-11-29 10:36:13.417450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:33.996 [2024-11-29 10:36:13.417460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:33.996 [2024-11-29 10:36:13.417464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:33.996 [2024-11-29 10:36:13.417474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:33.996 [2024-11-29 10:36:13.417479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:33.996 [2024-11-29 10:36:13.417489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:33.996 [2024-11-29 10:36:13.417493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:33.996 [2024-11-29 10:36:13.417505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:33.996 [2024-11-29 10:36:13.417509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:33.996 [2024-11-29 10:36:13.417519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:33.996 [2024-11-29 10:36:13.417536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:33.996 [2024-11-29 10:36:13.417552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:33.996 [2024-11-29 10:36:13.417557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417563] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:33.996 [2024-11-29 10:36:13.417574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:33.996 [2024-11-29 10:36:13.417580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:33.996 [2024-11-29 10:36:13.417596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:33.996 [2024-11-29 10:36:13.417602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:33.996 [2024-11-29 10:36:13.417608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:33.996 [2024-11-29 10:36:13.417613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:33.996 [2024-11-29 10:36:13.417619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:33.996 [2024-11-29 10:36:13.417624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:33.996 [2024-11-29 10:36:13.417631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:33.996 [2024-11-29 10:36:13.417641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:33.996 [2024-11-29 10:36:13.417654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:33.996 [2024-11-29 10:36:13.417672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:33.996 [2024-11-29 10:36:13.417678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:33.996 [2024-11-29 10:36:13.417684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:33.996 [2024-11-29 10:36:13.417690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:33.996 [2024-11-29 10:36:13.417734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:33.996 [2024-11-29 10:36:13.417741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:33.996 [2024-11-29 10:36:13.417758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:33.996 [2024-11-29 10:36:13.417764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:33.996 [2024-11-29 10:36:13.417771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:33.996 [2024-11-29 10:36:13.417777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.996 [2024-11-29 10:36:13.417787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:33.996 [2024-11-29 10:36:13.417793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.496 ms 00:30:33.996 [2024-11-29 10:36:13.417809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.996 [2024-11-29 10:36:13.423575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.996 [2024-11-29 10:36:13.423599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:33.996 [2024-11-29 10:36:13.423609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.726 ms 00:30:33.996 [2024-11-29 10:36:13.423616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.996 [2024-11-29 10:36:13.423640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.996 [2024-11-29 10:36:13.423648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:33.996 [2024-11-29 10:36:13.423656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:33.997 [2024-11-29 10:36:13.423662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.431085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.431112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:33.997 [2024-11-29 10:36:13.431120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.385 ms 00:30:33.997 [2024-11-29 10:36:13.431126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.431146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.431155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:33.997 [2024-11-29 10:36:13.431162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.997 [2024-11-29 10:36:13.431170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.431224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.431232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:33.997 [2024-11-29 10:36:13.431238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:33.997 [2024-11-29 10:36:13.431246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.431280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.431286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:33.997 [2024-11-29 10:36:13.431292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:33.997 [2024-11-29 10:36:13.431299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.436050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.436075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:33.997 [2024-11-29 10:36:13.436084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.734 ms 00:30:33.997 [2024-11-29 10:36:13.436094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.436168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.436176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:33.997 [2024-11-29 10:36:13.436185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.997 [2024-11-29 10:36:13.436190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.446610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.446643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:33.997 [2024-11-29 10:36:13.446659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.405 ms 00:30:33.997 [2024-11-29 10:36:13.446666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.997 [2024-11-29 10:36:13.447624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.997 [2024-11-29 10:36:13.447650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:33.997 [2024-11-29 10:36:13.447659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:30:33.997 [2024-11-29 10:36:13.447665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.257 [2024-11-29 10:36:13.459875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.257 [2024-11-29 10:36:13.459912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:34.257 [2024-11-29 10:36:13.459921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.177 ms 00:30:34.257 [2024-11-29 10:36:13.459927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.257 [2024-11-29 10:36:13.460025] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:34.257 [2024-11-29 10:36:13.460095] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:34.257 [2024-11-29 10:36:13.460159] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:34.257 [2024-11-29 10:36:13.460226] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:34.257 [2024-11-29 10:36:13.460238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.257 [2024-11-29 10:36:13.460245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:34.257 [2024-11-29 
10:36:13.460254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:30:34.257 [2024-11-29 10:36:13.460260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.257 [2024-11-29 10:36:13.460303] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:34.257 [2024-11-29 10:36:13.460312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.257 [2024-11-29 10:36:13.460318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:34.257 [2024-11-29 10:36:13.460325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:34.257 [2024-11-29 10:36:13.460330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.257 [2024-11-29 10:36:13.462147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.257 [2024-11-29 10:36:13.462177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:34.257 [2024-11-29 10:36:13.462184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.800 ms 00:30:34.257 [2024-11-29 10:36:13.462192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.257 [2024-11-29 10:36:13.462672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.257 [2024-11-29 10:36:13.462696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:34.257 [2024-11-29 10:36:13.462704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:34.257 [2024-11-29 10:36:13.462710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.258 [2024-11-29 10:36:13.462757] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:34.258 [2024-11-29 10:36:13.462890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.258 [2024-11-29 10:36:13.462904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:34.258 [2024-11-29 10:36:13.462913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.135 ms 00:30:34.258 [2024-11-29 10:36:13.462920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.830 [2024-11-29 10:36:14.157383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.830 [2024-11-29 10:36:14.157434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:34.830 [2024-11-29 10:36:14.157448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 694.222 ms 00:30:34.830 [2024-11-29 10:36:14.157456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.830 [2024-11-29 10:36:14.159213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.830 [2024-11-29 10:36:14.159249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:34.830 [2024-11-29 10:36:14.159259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.464 ms 00:30:34.830 [2024-11-29 10:36:14.159272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.830 [2024-11-29 10:36:14.160196] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:34.830 [2024-11-29 10:36:14.160227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.830 [2024-11-29 10:36:14.160237] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:34.830 [2024-11-29 10:36:14.160247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.925 ms 00:30:34.830 [2024-11-29 10:36:14.160256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.830 [2024-11-29 10:36:14.160286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.830 [2024-11-29 10:36:14.160300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:34.830 [2024-11-29 10:36:14.160314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:34.830 [2024-11-29 10:36:14.160323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.830 [2024-11-29 10:36:14.160358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 697.596 ms, result 0 00:30:34.830 [2024-11-29 10:36:14.160403] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:34.830 [2024-11-29 10:36:14.160461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.830 [2024-11-29 10:36:14.160471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:34.830 [2024-11-29 10:36:14.160481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:30:34.830 [2024-11-29 10:36:14.160489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 10:36:14.797259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.402 [2024-11-29 10:36:14.797318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:35.402 [2024-11-29 10:36:14.797330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 636.405 ms 00:30:35.402 [2024-11-29 10:36:14.797338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 10:36:14.798952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.402 [2024-11-29 10:36:14.798982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:35.402 [2024-11-29 10:36:14.798991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.314 ms 00:30:35.402 [2024-11-29 10:36:14.798998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 10:36:14.800008] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:35.402 [2024-11-29 10:36:14.800039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.402 [2024-11-29 10:36:14.800047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:35.402 [2024-11-29 10:36:14.800056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.015 ms 00:30:35.402 [2024-11-29 10:36:14.800064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 10:36:14.800092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.402 [2024-11-29 10:36:14.800102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:35.402 [2024-11-29 10:36:14.800111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:35.402 [2024-11-29 10:36:14.800118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 
10:36:14.800154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 639.751 ms, result 0 00:30:35.402 [2024-11-29 10:36:14.800195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:35.402 [2024-11-29 10:36:14.800206] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:35.402 [2024-11-29 10:36:14.800216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.402 [2024-11-29 10:36:14.800225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:35.402 [2024-11-29 10:36:14.800240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1337.475 ms 00:30:35.402 [2024-11-29 10:36:14.800251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 10:36:14.800279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.402 [2024-11-29 10:36:14.800291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:35.402 [2024-11-29 10:36:14.800302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:35.402 [2024-11-29 10:36:14.800309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.402 [2024-11-29 10:36:14.807984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:35.403 [2024-11-29 10:36:14.808081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.808094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:35.403 [2024-11-29 10:36:14.808107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.758 ms 00:30:35.403 [2024-11-29 10:36:14.808115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.808789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.808819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:35.403 [2024-11-29 10:36:14.808829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:30:35.403 [2024-11-29 10:36:14.808836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.811092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.811116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:35.403 [2024-11-29 10:36:14.811130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.241 ms 00:30:35.403 [2024-11-29 10:36:14.811139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.811176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.811185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:35.403 [2024-11-29 10:36:14.811194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:35.403 [2024-11-29 10:36:14.811201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.811302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.811312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:35.403 
[2024-11-29 10:36:14.811321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:35.403 [2024-11-29 10:36:14.811328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.811347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.811355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:35.403 [2024-11-29 10:36:14.811363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:35.403 [2024-11-29 10:36:14.811372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.811397] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:35.403 [2024-11-29 10:36:14.811406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.811420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:35.403 [2024-11-29 10:36:14.811428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:35.403 [2024-11-29 10:36:14.811437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.811488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.403 [2024-11-29 10:36:14.811497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:35.403 [2024-11-29 10:36:14.811504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:35.403 [2024-11-29 10:36:14.811511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.403 [2024-11-29 10:36:14.812344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1400.792 ms, result 0 00:30:35.403 [2024-11-29 10:36:14.824648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.403 [2024-11-29 10:36:14.840641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:35.403 [2024-11-29 10:36:14.848726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:35.975 Validate MD5 checksum, iteration 1 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:35.975 10:36:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:35.975 10:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.234 [2024-11-29 10:36:15.468163] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:30:36.234 [2024-11-29 10:36:15.468278] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94665 ] 00:30:36.234 [2024-11-29 10:36:15.621225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.234 [2024-11-29 10:36:15.639189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.617  [2024-11-29T10:36:17.650Z] Copying: 723/1024 [MB] (723 MBps) [2024-11-29T10:36:18.221Z] Copying: 1024/1024 [MB] (average 705 MBps) 00:30:38.757 00:30:38.757 10:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:38.757 10:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fc1653ef2188a221f7ecf03037594574 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fc1653ef2188a221f7ecf03037594574 != \f\c\1\6\5\3\e\f\2\1\8\8\a\2\2\1\f\7\e\c\f\0\3\0\3\7\5\9\4\5\7\4 ]] 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:40.666 Validate MD5 checksum, iteration 2 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:40.666 10:36:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:40.925 [2024-11-29 10:36:20.156991] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 22.11.4 initialization... 00:30:40.925 [2024-11-29 10:36:20.157077] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94714 ] 00:30:40.925 [2024-11-29 10:36:20.300409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.925 [2024-11-29 10:36:20.318322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.303  [2024-11-29T10:36:22.341Z] Copying: 693/1024 [MB] (693 MBps) [2024-11-29T10:36:22.915Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:30:43.450 00:30:43.450 10:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:43.450 10:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=697dd705f4ae9c8c9397e11e6690096b 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 697dd705f4ae9c8c9397e11e6690096b != \6\9\7\d\d\7\0\5\f\4\a\e\9\c\8\c\9\3\9\7\e\1\1\e\6\6\9\0\0\9\6\b ]] 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94625 ]] 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94625 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94625 ']' 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94625 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94625 00:30:45.357 killing process with pid 94625 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94625' 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94625 00:30:45.357 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94625 00:30:45.357 [2024-11-29 10:36:24.775501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:45.357 [2024-11-29 10:36:24.780110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.780140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:45.357 [2024-11-29 10:36:24.780150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:45.357 [2024-11-29 10:36:24.780157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.780174] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:45.357 [2024-11-29 10:36:24.780539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.780558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:45.357 [2024-11-29 10:36:24.780568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:30:45.357 [2024-11-29 10:36:24.780574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.780753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.780765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:45.357 [2024-11-29 10:36:24.780772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:30:45.357 [2024-11-29 10:36:24.780778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.781825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.781842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:45.357 [2024-11-29 10:36:24.781849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:30:45.357 [2024-11-29 10:36:24.781872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.782719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.782732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:45.357 [2024-11-29 10:36:24.782739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.822 ms 00:30:45.357 [2024-11-29 10:36:24.782746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.784044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.784070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:45.357 [2024-11-29 10:36:24.784081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.272 ms 00:30:45.357 [2024-11-29 10:36:24.784087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.785345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.785371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:45.357 [2024-11-29 10:36:24.785378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.202 ms 00:30:45.357 [2024-11-29 10:36:24.785384] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.785487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.785495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:45.357 [2024-11-29 10:36:24.785502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:30:45.357 [2024-11-29 10:36:24.785511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.786580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.786612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:45.357 [2024-11-29 10:36:24.786619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.056 ms 00:30:45.357 [2024-11-29 10:36:24.786625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.357 [2024-11-29 10:36:24.787911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.357 [2024-11-29 10:36:24.787935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:45.357 [2024-11-29 10:36:24.787942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.175 ms 00:30:45.358 [2024-11-29 10:36:24.787947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.788782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.358 [2024-11-29 10:36:24.788819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:45.358 [2024-11-29 10:36:24.788827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:30:45.358 [2024-11-29 10:36:24.788833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.789881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.358 [2024-11-29 10:36:24.789906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:45.358 [2024-11-29 10:36:24.789914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.978 ms 00:30:45.358 [2024-11-29 10:36:24.789919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.789963] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:45.358 [2024-11-29 10:36:24.789975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:45.358 [2024-11-29 10:36:24.789983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:45.358 [2024-11-29 10:36:24.789990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:45.358 [2024-11-29 10:36:24.789996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 
[2024-11-29 10:36:24.790026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:45.358 [2024-11-29 10:36:24.790085] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:45.358 [2024-11-29 10:36:24.790090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1c8053d1-ca8b-4651-9e70-c19e5eabfd4e 00:30:45.358 [2024-11-29 10:36:24.790097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:45.358 [2024-11-29 10:36:24.790102] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:45.358 [2024-11-29 10:36:24.790108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:45.358 [2024-11-29 10:36:24.790114] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:45.358 [2024-11-29 10:36:24.790119] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:45.358 [2024-11-29 10:36:24.790125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:45.358 [2024-11-29 10:36:24.790134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:45.358 [2024-11-29 10:36:24.790139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:45.358 [2024-11-29 10:36:24.790144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:45.358 [2024-11-29 10:36:24.790150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.358 [2024-11-29 10:36:24.790156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:45.358 [2024-11-29 10:36:24.790162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:30:45.358 [2024-11-29 10:36:24.790168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.791332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.358 [2024-11-29 10:36:24.791349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:45.358 [2024-11-29 10:36:24.791356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.152 ms 00:30:45.358 [2024-11-29 10:36:24.791362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:30:45.358 [2024-11-29 10:36:24.791432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.358 [2024-11-29 10:36:24.791443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:45.358 [2024-11-29 10:36:24.791449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:45.358 [2024-11-29 10:36:24.791455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.795969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.796002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:45.358 [2024-11-29 10:36:24.796010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.796018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.796042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.796048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:45.358 [2024-11-29 10:36:24.796054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.796059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.796113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.796124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:45.358 [2024-11-29 10:36:24.796131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.796136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.796155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.796162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:45.358 [2024-11-29 10:36:24.796168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.796174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.804010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.804042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:45.358 [2024-11-29 10:36:24.804050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.804057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:45.358 [2024-11-29 10:36:24.810169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:45.358 [2024-11-29 10:36:24.810227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810232] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:45.358 [2024-11-29 10:36:24.810291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:45.358 [2024-11-29 10:36:24.810358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:45.358 [2024-11-29 10:36:24.810401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:45.358 [2024-11-29 10:36:24.810450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.358 [2024-11-29 10:36:24.810500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:45.358 [2024-11-29 10:36:24.810506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.358 [2024-11-29 10:36:24.810512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.358 [2024-11-29 10:36:24.810602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 30.473 ms, result 0 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.618 Remove shared memory files 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:45.618 10:36:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94421 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:45.618 00:30:45.618 real 1m16.124s 00:30:45.618 user 1m42.854s 00:30:45.618 sys 0m19.429s 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.618 ************************************ 00:30:45.618 END TEST ftl_upgrade_shutdown 00:30:45.618 ************************************ 00:30:45.618 10:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:45.618 10:36:25 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:45.618 10:36:25 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:45.618 10:36:25 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:45.618 10:36:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.618 10:36:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:45.618 ************************************ 00:30:45.618 START TEST ftl_restore_fast 00:30:45.618 ************************************ 00:30:45.618 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:45.876 * Looking for test storage... 00:30:45.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.876 --rc genhtml_branch_coverage=1 00:30:45.876 --rc genhtml_function_coverage=1 00:30:45.876 --rc genhtml_legend=1 00:30:45.876 --rc geninfo_all_blocks=1 00:30:45.876 --rc geninfo_unexecuted_blocks=1 00:30:45.876 00:30:45.876 ' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.876 --rc genhtml_branch_coverage=1 00:30:45.876 --rc genhtml_function_coverage=1 00:30:45.876 --rc genhtml_legend=1 00:30:45.876 --rc geninfo_all_blocks=1 00:30:45.876 --rc geninfo_unexecuted_blocks=1 00:30:45.876 00:30:45.876 ' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.876 --rc genhtml_branch_coverage=1 00:30:45.876 --rc genhtml_function_coverage=1 00:30:45.876 --rc genhtml_legend=1 00:30:45.876 --rc geninfo_all_blocks=1 00:30:45.876 --rc geninfo_unexecuted_blocks=1 00:30:45.876 00:30:45.876 ' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.876 --rc genhtml_branch_coverage=1 00:30:45.876 --rc genhtml_function_coverage=1 00:30:45.876 --rc genhtml_legend=1 00:30:45.876 --rc geninfo_all_blocks=1 00:30:45.876 --rc geninfo_unexecuted_blocks=1 00:30:45.876 00:30:45.876 ' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.876 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.XYeMQHoIQs 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:45.877 10:36:25 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94846 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94846 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94846 ']' 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.877 10:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:45.877 [2024-11-29 10:36:25.258529] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:30:45.877 [2024-11-29 10:36:25.258623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94846 ] 00:30:46.133 [2024-11-29 10:36:25.393266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.133 [2024-11-29 10:36:25.409610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.697 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:46.698 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:46.956 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:47.214 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:47.214 { 00:30:47.214 "name": "nvme0n1", 00:30:47.214 "aliases": [ 00:30:47.214 "07bec2a8-d90c-4e0e-96a0-833601cf12b2" 00:30:47.214 ], 00:30:47.214 "product_name": "NVMe disk", 00:30:47.214 "block_size": 4096, 00:30:47.214 "num_blocks": 1310720, 00:30:47.214 "uuid": "07bec2a8-d90c-4e0e-96a0-833601cf12b2", 00:30:47.214 "numa_id": -1, 00:30:47.214 "assigned_rate_limits": { 00:30:47.214 "rw_ios_per_sec": 0, 00:30:47.214 "rw_mbytes_per_sec": 0, 00:30:47.214 "r_mbytes_per_sec": 0, 00:30:47.214 "w_mbytes_per_sec": 0 00:30:47.214 }, 00:30:47.214 "claimed": true, 00:30:47.214 "claim_type": "read_many_write_one", 00:30:47.214 "zoned": false, 00:30:47.214 "supported_io_types": { 00:30:47.214 "read": true, 00:30:47.214 "write": true, 00:30:47.214 "unmap": true, 00:30:47.214 "flush": true, 00:30:47.214 "reset": true, 00:30:47.214 "nvme_admin": true, 00:30:47.214 "nvme_io": true, 00:30:47.214 "nvme_io_md": false, 00:30:47.215 "write_zeroes": true, 00:30:47.215 "zcopy": false, 00:30:47.215 "get_zone_info": false, 00:30:47.215 "zone_management": false, 00:30:47.215 "zone_append": false, 00:30:47.215 "compare": true, 00:30:47.215 "compare_and_write": false, 00:30:47.215 "abort": true, 00:30:47.215 "seek_hole": false, 00:30:47.215 "seek_data": false, 00:30:47.215 "copy": true, 00:30:47.215 "nvme_iov_md": false 00:30:47.215 }, 00:30:47.215 "driver_specific": { 00:30:47.215 "nvme": [ 00:30:47.215 { 00:30:47.215 "pci_address": "0000:00:11.0", 00:30:47.215 "trid": { 00:30:47.215 "trtype": "PCIe", 00:30:47.215 "traddr": "0000:00:11.0" 00:30:47.215 }, 00:30:47.215 "ctrlr_data": { 00:30:47.215 "cntlid": 0, 00:30:47.215 "vendor_id": "0x1b36", 00:30:47.215 "model_number": "QEMU NVMe Ctrl", 00:30:47.215 "serial_number": "12341", 00:30:47.215 "firmware_revision": "8.0.0", 00:30:47.215 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:47.215 "oacs": { 00:30:47.215 "security": 0, 00:30:47.215 "format": 1, 00:30:47.215 "firmware": 0, 00:30:47.215 "ns_manage": 1 00:30:47.215 }, 00:30:47.215 "multi_ctrlr": false, 00:30:47.215 "ana_reporting": false 00:30:47.215 }, 00:30:47.215 "vs": { 00:30:47.215 "nvme_version": "1.4" 00:30:47.215 }, 00:30:47.215 "ns_data": { 00:30:47.215 "id": 1, 00:30:47.215 "can_share": false 00:30:47.215 } 00:30:47.215 } 00:30:47.215 ], 00:30:47.215 "mp_policy": "active_passive" 00:30:47.215 } 00:30:47.215 } 00:30:47.215 ]' 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:47.215 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:47.473 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=be860034-09bc-4064-ab0c-a3f5274d1004 00:30:47.473 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:47.473 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u be860034-09bc-4064-ab0c-a3f5274d1004 00:30:47.732 10:36:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:47.732 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=6f5ea66b-2de9-43f4-9752-6c691b00f3c2 00:30:47.733 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6f5ea66b-2de9-43f4-9752-6c691b00f3c2 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:47.992 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:48.250 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:48.250 { 00:30:48.250 "name": "1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d", 00:30:48.250 "aliases": [ 00:30:48.250 "lvs/nvme0n1p0" 00:30:48.250 ], 00:30:48.250 "product_name": "Logical Volume", 00:30:48.250 "block_size": 4096, 00:30:48.250 "num_blocks": 26476544, 00:30:48.251 "uuid": "1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d", 00:30:48.251 "assigned_rate_limits": { 00:30:48.251 "rw_ios_per_sec": 0, 00:30:48.251 "rw_mbytes_per_sec": 0, 00:30:48.251 "r_mbytes_per_sec": 0, 00:30:48.251 "w_mbytes_per_sec": 0 00:30:48.251 }, 00:30:48.251 "claimed": false, 00:30:48.251 "zoned": false, 00:30:48.251 "supported_io_types": { 00:30:48.251 "read": true, 00:30:48.251 "write": true, 00:30:48.251 "unmap": true, 00:30:48.251 "flush": false, 00:30:48.251 "reset": true, 00:30:48.251 "nvme_admin": false, 00:30:48.251 "nvme_io": false, 00:30:48.251 "nvme_io_md": false, 00:30:48.251 "write_zeroes": true, 00:30:48.251 "zcopy": false, 00:30:48.251 "get_zone_info": false, 00:30:48.251 "zone_management": false, 00:30:48.251 
"zone_append": false, 00:30:48.251 "compare": false, 00:30:48.251 "compare_and_write": false, 00:30:48.251 "abort": false, 00:30:48.251 "seek_hole": true, 00:30:48.251 "seek_data": true, 00:30:48.251 "copy": false, 00:30:48.251 "nvme_iov_md": false 00:30:48.251 }, 00:30:48.251 "driver_specific": { 00:30:48.251 "lvol": { 00:30:48.251 "lvol_store_uuid": "6f5ea66b-2de9-43f4-9752-6c691b00f3c2", 00:30:48.251 "base_bdev": "nvme0n1", 00:30:48.251 "thin_provision": true, 00:30:48.251 "num_allocated_clusters": 0, 00:30:48.251 "snapshot": false, 00:30:48.251 "clone": false, 00:30:48.251 "esnap_clone": false 00:30:48.251 } 00:30:48.251 } 00:30:48.251 } 00:30:48.251 ]' 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:48.251 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:48.510 10:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:48.768 { 00:30:48.768 "name": "1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d", 00:30:48.768 "aliases": [ 00:30:48.768 "lvs/nvme0n1p0" 00:30:48.768 ], 00:30:48.768 "product_name": "Logical Volume", 00:30:48.768 "block_size": 4096, 00:30:48.768 "num_blocks": 26476544, 00:30:48.768 "uuid": "1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d", 00:30:48.768 "assigned_rate_limits": { 00:30:48.768 "rw_ios_per_sec": 0, 00:30:48.768 "rw_mbytes_per_sec": 0, 00:30:48.768 "r_mbytes_per_sec": 0, 00:30:48.768 "w_mbytes_per_sec": 0 00:30:48.768 }, 00:30:48.768 "claimed": false, 00:30:48.768 "zoned": false, 00:30:48.768 "supported_io_types": { 00:30:48.768 "read": true, 00:30:48.768 "write": true, 00:30:48.768 "unmap": true, 00:30:48.768 "flush": false, 00:30:48.768 "reset": true, 00:30:48.768 "nvme_admin": false, 00:30:48.768 "nvme_io": false, 00:30:48.768 "nvme_io_md": false, 00:30:48.768 "write_zeroes": true, 00:30:48.768 "zcopy": false, 00:30:48.768 "get_zone_info": false, 00:30:48.768 
"zone_management": false, 00:30:48.768 "zone_append": false, 00:30:48.768 "compare": false, 00:30:48.768 "compare_and_write": false, 00:30:48.768 "abort": false, 00:30:48.768 "seek_hole": true, 00:30:48.768 "seek_data": true, 00:30:48.768 "copy": false, 00:30:48.768 "nvme_iov_md": false 00:30:48.768 }, 00:30:48.768 "driver_specific": { 00:30:48.768 "lvol": { 00:30:48.768 "lvol_store_uuid": "6f5ea66b-2de9-43f4-9752-6c691b00f3c2", 00:30:48.768 "base_bdev": "nvme0n1", 00:30:48.768 "thin_provision": true, 00:30:48.768 "num_allocated_clusters": 0, 00:30:48.768 "snapshot": false, 00:30:48.768 "clone": false, 00:30:48.768 "esnap_clone": false 00:30:48.768 } 00:30:48.768 } 00:30:48.768 } 00:30:48.768 ]' 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:48.768 10:36:28 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:49.027 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d 00:30:49.286 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:49.286 { 00:30:49.286 "name": "1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d", 00:30:49.286 "aliases": [ 00:30:49.286 "lvs/nvme0n1p0" 00:30:49.287 ], 00:30:49.287 "product_name": "Logical Volume", 00:30:49.287 "block_size": 4096, 00:30:49.287 "num_blocks": 26476544, 00:30:49.287 "uuid": "1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d", 00:30:49.287 "assigned_rate_limits": { 00:30:49.287 "rw_ios_per_sec": 0, 00:30:49.287 "rw_mbytes_per_sec": 0, 00:30:49.287 "r_mbytes_per_sec": 0, 00:30:49.287 "w_mbytes_per_sec": 0 00:30:49.287 }, 00:30:49.287 "claimed": false, 00:30:49.287 "zoned": false, 00:30:49.287 "supported_io_types": { 00:30:49.287 "read": true, 00:30:49.287 "write": true, 00:30:49.287 "unmap": true, 00:30:49.287 "flush": false, 00:30:49.287 "reset": true, 00:30:49.287 "nvme_admin": false, 00:30:49.287 "nvme_io": false, 00:30:49.287 "nvme_io_md": false, 00:30:49.287 "write_zeroes": true, 00:30:49.287 "zcopy": false, 00:30:49.287 "get_zone_info": false, 00:30:49.287 "zone_management": false, 00:30:49.287 "zone_append": false, 00:30:49.287 "compare": false, 00:30:49.287 "compare_and_write": false, 00:30:49.287 "abort": false, 
00:30:49.287 "seek_hole": true, 00:30:49.287 "seek_data": true, 00:30:49.287 "copy": false, 00:30:49.287 "nvme_iov_md": false 00:30:49.287 }, 00:30:49.287 "driver_specific": { 00:30:49.287 "lvol": { 00:30:49.287 "lvol_store_uuid": "6f5ea66b-2de9-43f4-9752-6c691b00f3c2", 00:30:49.287 "base_bdev": "nvme0n1", 00:30:49.287 "thin_provision": true, 00:30:49.287 "num_allocated_clusters": 0, 00:30:49.287 "snapshot": false, 00:30:49.287 "clone": false, 00:30:49.287 "esnap_clone": false 00:30:49.287 } 00:30:49.287 } 00:30:49.287 } 00:30:49.287 ]' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d --l2p_dram_limit 10' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:49.287 10:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1f1064fc-a0ba-4722-9d2b-d40e9cd65f7d --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:49.287 [2024-11-29 10:36:28.707933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.707975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:49.287 [2024-11-29 10:36:28.707986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:49.287 [2024-11-29 10:36:28.707994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.708037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.708048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:49.287 [2024-11-29 10:36:28.708054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:49.287 [2024-11-29 10:36:28.708062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.708077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:49.287 [2024-11-29 10:36:28.708286] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:49.287 [2024-11-29 10:36:28.708297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.708304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:49.287 [2024-11-29 10:36:28.708311] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:30:49.287 [2024-11-29 10:36:28.708320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.708367] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5e481aca-6577-46d9-82fb-5f3ca05f74f2 00:30:49.287 [2024-11-29 10:36:28.709314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.709340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:49.287 [2024-11-29 10:36:28.709352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:49.287 [2024-11-29 10:36:28.709358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.714196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.714223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:49.287 [2024-11-29 10:36:28.714235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.799 ms 00:30:49.287 [2024-11-29 10:36:28.714243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.714305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.714312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:49.287 [2024-11-29 10:36:28.714319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:49.287 [2024-11-29 10:36:28.714325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.714365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.714375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:49.287 [2024-11-29 10:36:28.714382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:49.287 [2024-11-29 10:36:28.714388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.714406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:49.287 [2024-11-29 10:36:28.715669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.715693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:49.287 [2024-11-29 10:36:28.715700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:30:49.287 [2024-11-29 10:36:28.715707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.715736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.715744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:49.287 [2024-11-29 10:36:28.715750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:49.287 [2024-11-29 10:36:28.715758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.715771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:49.287 [2024-11-29 10:36:28.715897] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:49.287 [2024-11-29 10:36:28.715907] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:49.287 [2024-11-29 10:36:28.715917] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:49.287 [2024-11-29 10:36:28.715927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:49.287 [2024-11-29 10:36:28.715937] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:49.287 [2024-11-29 10:36:28.715944] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:49.287 [2024-11-29 10:36:28.715951] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:49.287 [2024-11-29 10:36:28.715957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:49.287 [2024-11-29 10:36:28.715964] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:49.287 [2024-11-29 10:36:28.715969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.715977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:49.287 [2024-11-29 10:36:28.715985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:30:49.287 [2024-11-29 10:36:28.715993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.716060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.287 [2024-11-29 10:36:28.716069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:49.287 [2024-11-29 10:36:28.716076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:49.287 [2024-11-29 10:36:28.716083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.287 [2024-11-29 10:36:28.716156] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:49.287 [2024-11-29 10:36:28.716164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:49.287 [2024-11-29 10:36:28.716170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.287 [2024-11-29 10:36:28.716177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.287 [2024-11-29 10:36:28.716183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:49.287 [2024-11-29 10:36:28.716189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:49.287 [2024-11-29 10:36:28.716195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:49.287 [2024-11-29 10:36:28.716202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:49.288 [2024-11-29 10:36:28.716211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.288 [2024-11-29 10:36:28.716222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:49.288 [2024-11-29 10:36:28.716229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:49.288 [2024-11-29 10:36:28.716234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.288 [2024-11-29 10:36:28.716243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:49.288 [2024-11-29 10:36:28.716248] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:49.288 [2024-11-29 10:36:28.716254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:49.288 [2024-11-29 10:36:28.716265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:49.288 [2024-11-29 10:36:28.716270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:49.288 [2024-11-29 10:36:28.716282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.288 [2024-11-29 10:36:28.716293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:49.288 [2024-11-29 10:36:28.716299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.288 [2024-11-29 10:36:28.716318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:49.288 [2024-11-29 10:36:28.716324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.288 [2024-11-29 10:36:28.716337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:49.288 [2024-11-29 10:36:28.716346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.288 [2024-11-29 10:36:28.716358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:49.288 [2024-11-29 10:36:28.716365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.288 [2024-11-29 10:36:28.716378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:49.288 [2024-11-29 10:36:28.716386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:49.288 [2024-11-29 10:36:28.716392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.288 [2024-11-29 10:36:28.716400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:49.288 [2024-11-29 10:36:28.716406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:49.288 [2024-11-29 10:36:28.716413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:49.288 [2024-11-29 10:36:28.716429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:49.288 [2024-11-29 10:36:28.716435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716442] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:49.288 [2024-11-29 10:36:28.716453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:49.288 [2024-11-29 10:36:28.716462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:30:49.288 [2024-11-29 10:36:28.716468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.288 [2024-11-29 10:36:28.716480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:49.288 [2024-11-29 10:36:28.716486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:49.288 [2024-11-29 10:36:28.716494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:49.288 [2024-11-29 10:36:28.716499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:49.288 [2024-11-29 10:36:28.716506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:49.288 [2024-11-29 10:36:28.716512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:49.288 [2024-11-29 10:36:28.716524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:49.288 [2024-11-29 10:36:28.716531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:49.288 [2024-11-29 10:36:28.716546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:49.288 [2024-11-29 10:36:28.716554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:49.288 [2024-11-29 10:36:28.716560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:49.288 [2024-11-29 10:36:28.716568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:49.288 [2024-11-29 10:36:28.716574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:49.288 [2024-11-29 10:36:28.716583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:49.288 [2024-11-29 10:36:28.716589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:49.288 [2024-11-29 10:36:28.716597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:49.288 [2024-11-29 10:36:28.716603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:30:49.288 [2024-11-29 10:36:28.716638] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:49.288 [2024-11-29 10:36:28.716645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:49.288 [2024-11-29 10:36:28.716661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:49.288 [2024-11-29 10:36:28.716669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:49.288 [2024-11-29 10:36:28.716675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:49.288 [2024-11-29 10:36:28.716683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.288 [2024-11-29 10:36:28.716689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:49.288 [2024-11-29 10:36:28.716699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:30:49.288 [2024-11-29 10:36:28.716705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.288 [2024-11-29 10:36:28.716738] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:49.288 [2024-11-29 10:36:28.716748] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:51.817 [2024-11-29 10:36:30.826627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.826692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:51.817 [2024-11-29 10:36:30.826710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2109.876 ms 00:30:51.817 [2024-11-29 10:36:30.826720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.837571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.837618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:51.817 [2024-11-29 10:36:30.837633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.752 ms 00:30:51.817 [2024-11-29 10:36:30.837644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.837750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.837760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:51.817 [2024-11-29 10:36:30.837772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:51.817 [2024-11-29 10:36:30.837781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.848317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.848353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:51.817 [2024-11-29 10:36:30.848368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.471 ms 00:30:51.817 [2024-11-29 10:36:30.848376] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.848408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.848417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:51.817 [2024-11-29 10:36:30.848427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:51.817 [2024-11-29 10:36:30.848435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.848890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.848908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:51.817 [2024-11-29 10:36:30.848920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:30:51.817 [2024-11-29 10:36:30.848931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.849048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.849063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:51.817 [2024-11-29 10:36:30.849074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:30:51.817 [2024-11-29 10:36:30.849083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.855967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.855997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:51.817 [2024-11-29 10:36:30.856008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.863 ms 00:30:51.817 [2024-11-29 10:36:30.856016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.872878] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:51.817 [2024-11-29 10:36:30.876553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.876785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:51.817 [2024-11-29 10:36:30.876824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.471 ms 00:30:51.817 [2024-11-29 10:36:30.876838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.924089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.924142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:51.817 [2024-11-29 10:36:30.924153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.208 ms 00:30:51.817 [2024-11-29 10:36:30.924167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.924356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.924370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:51.817 [2024-11-29 10:36:30.924379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:30:51.817 [2024-11-29 10:36:30.924389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.927333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.927372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:30:51.817 [2024-11-29 10:36:30.927382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:30:51.817 [2024-11-29 10:36:30.927392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.929632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.929667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:51.817 [2024-11-29 10:36:30.929677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:30:51.817 [2024-11-29 10:36:30.929686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.930020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.930033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:51.817 [2024-11-29 10:36:30.930042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:30:51.817 [2024-11-29 10:36:30.930054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.959404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.959445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:51.817 [2024-11-29 10:36:30.959455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.332 ms 00:30:51.817 [2024-11-29 10:36:30.959465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.963644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.963680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:51.817 [2024-11-29 10:36:30.963695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:30:51.817 [2024-11-29 10:36:30.963705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.966610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.966752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:51.817 [2024-11-29 10:36:30.966767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:30:51.817 [2024-11-29 10:36:30.966777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.970246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.970282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:51.817 [2024-11-29 10:36:30.970291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:30:51.817 [2024-11-29 10:36:30.970302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.970339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.970351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:51.817 [2024-11-29 10:36:30.970361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:51.817 [2024-11-29 10:36:30.970370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.970437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.817 [2024-11-29 10:36:30.970449] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:51.817 [2024-11-29 10:36:30.970459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:51.817 [2024-11-29 10:36:30.970470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.817 [2024-11-29 10:36:30.971491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2263.068 ms, result 0 00:30:51.817 { 00:30:51.817 "name": "ftl0", 00:30:51.817 "uuid": "5e481aca-6577-46d9-82fb-5f3ca05f74f2" 00:30:51.817 } 00:30:51.817 10:36:30 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:51.817 10:36:30 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:51.817 10:36:31 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:51.817 10:36:31 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:52.077 [2024-11-29 10:36:31.383563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.077 [2024-11-29 10:36:31.383615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:52.077 [2024-11-29 10:36:31.383630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:52.077 [2024-11-29 10:36:31.383640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.077 [2024-11-29 10:36:31.383665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:52.077 [2024-11-29 10:36:31.384234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.077 [2024-11-29 10:36:31.384256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:52.077 [2024-11-29 10:36:31.384265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:30:52.077 [2024-11-29 10:36:31.384277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.077 [2024-11-29 10:36:31.384532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.077 [2024-11-29 10:36:31.384549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:52.077 [2024-11-29 10:36:31.384559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:30:52.077 [2024-11-29 10:36:31.384570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.077 [2024-11-29 10:36:31.387824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.077 [2024-11-29 10:36:31.387848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:52.077 [2024-11-29 10:36:31.387857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:30:52.077 [2024-11-29 10:36:31.387867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.077 [2024-11-29 10:36:31.394144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.077 [2024-11-29 10:36:31.394174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:52.077 [2024-11-29 10:36:31.394186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.261 ms 00:30:52.077 [2024-11-29 10:36:31.394196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.077 [2024-11-29 10:36:31.395786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:52.077 [2024-11-29 10:36:31.395838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:52.077 [2024-11-29 10:36:31.395847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:30:52.078 [2024-11-29 10:36:31.395857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.400254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.078 [2024-11-29 10:36:31.400288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:52.078 [2024-11-29 10:36:31.400298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:30:52.078 [2024-11-29 10:36:31.400308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.400424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.078 [2024-11-29 10:36:31.400442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:52.078 [2024-11-29 10:36:31.400451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:30:52.078 [2024-11-29 10:36:31.400460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.401933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.078 [2024-11-29 10:36:31.401966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:52.078 [2024-11-29 10:36:31.401975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:30:52.078 [2024-11-29 10:36:31.401983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.403038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.078 [2024-11-29 10:36:31.403073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:52.078 [2024-11-29 10:36:31.403083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:30:52.078 [2024-11-29 10:36:31.403093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.404209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.078 [2024-11-29 10:36:31.404241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:52.078 [2024-11-29 10:36:31.404250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:30:52.078 [2024-11-29 10:36:31.404259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.405430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.078 [2024-11-29 10:36:31.405579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:52.078 [2024-11-29 10:36:31.405594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:30:52.078 [2024-11-29 10:36:31.405604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.078 [2024-11-29 10:36:31.405634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:52.078 [2024-11-29 10:36:31.405652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405677] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405943] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.405999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 
10:36:31.406162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:52.078 [2024-11-29 10:36:31.406287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:52.079 [2024-11-29 10:36:31.406373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:52.079 [2024-11-29 10:36:31.406578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:52.079 [2024-11-29 10:36:31.406586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e481aca-6577-46d9-82fb-5f3ca05f74f2 00:30:52.079 
[2024-11-29 10:36:31.406596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:52.079 [2024-11-29 10:36:31.406604] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:52.079 [2024-11-29 10:36:31.406612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:52.079 [2024-11-29 10:36:31.406622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:52.079 [2024-11-29 10:36:31.406631] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:52.079 [2024-11-29 10:36:31.406640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:52.079 [2024-11-29 10:36:31.406649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:52.079 [2024-11-29 10:36:31.406655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:52.079 [2024-11-29 10:36:31.406663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:52.079 [2024-11-29 10:36:31.406670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.079 [2024-11-29 10:36:31.406679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:52.079 [2024-11-29 10:36:31.406687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:30:52.079 [2024-11-29 10:36:31.406696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.408232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.079 [2024-11-29 10:36:31.408256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:52.079 [2024-11-29 10:36:31.408266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:30:52.079 [2024-11-29 10:36:31.408277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.408387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.079 [2024-11-29 10:36:31.408398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:52.079 [2024-11-29 10:36:31.408407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:52.079 [2024-11-29 10:36:31.408415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.414949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.415070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:52.079 [2024-11-29 10:36:31.415123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.415168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.415241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.415329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:52.079 [2024-11-29 10:36:31.415412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.415438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.415543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.415613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:52.079 [2024-11-29 10:36:31.415659] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.415685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.415735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.415761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:52.079 [2024-11-29 10:36:31.415782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.415871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.427724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.427889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:52.079 [2024-11-29 10:36:31.427958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.428067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.437537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.437664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:52.079 [2024-11-29 10:36:31.437718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.437743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.437879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.438007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:52.079 [2024-11-29 10:36:31.438162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.438192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.438260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.438290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:52.079 [2024-11-29 10:36:31.438376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.438402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.438496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.438529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:52.079 [2024-11-29 10:36:31.438625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.438647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.438732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.438767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:52.079 [2024-11-29 10:36:31.438788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.438922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.438985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.439194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:52.079 [2024-11-29 10:36:31.439263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.439290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.439354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.079 [2024-11-29 10:36:31.439382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:52.079 [2024-11-29 10:36:31.439402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.079 [2024-11-29 10:36:31.439455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.079 [2024-11-29 10:36:31.439615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.009 ms, result 0 00:30:52.079 true 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94846 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94846 ']' 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94846 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94846 00:30:52.080 killing process with pid 94846 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94846' 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94846 00:30:52.080 10:36:31 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94846 00:30:58.636 10:36:37 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:01.918 262144+0 records in 00:31:01.918 262144+0 records out 00:31:01.918 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.73451 s, 288 MB/s 00:31:01.918 10:36:41 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:03.820 10:36:43 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:03.820 [2024-11-29 10:36:43.282933] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:31:03.820 [2024-11-29 10:36:43.283024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95036 ] 00:31:04.078 [2024-11-29 10:36:43.425006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.078 [2024-11-29 10:36:43.442848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.078 [2024-11-29 10:36:43.527444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:04.078 [2024-11-29 10:36:43.527509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:04.338 [2024-11-29 10:36:43.680386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.680428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:04.338 [2024-11-29 10:36:43.680441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:04.338 [2024-11-29 10:36:43.680449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.680490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.680500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:04.338 [2024-11-29 10:36:43.680508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:04.338 [2024-11-29 10:36:43.680520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.680540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:04.338 [2024-11-29 10:36:43.680763] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:04.338 [2024-11-29 10:36:43.680779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.680791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:04.338 [2024-11-29 10:36:43.680818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:31:04.338 [2024-11-29 10:36:43.680829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.681823] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:04.338 [2024-11-29 10:36:43.683745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.683897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:04.338 [2024-11-29 10:36:43.683914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:31:04.338 [2024-11-29 10:36:43.683930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.683978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.683987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:04.338 [2024-11-29 10:36:43.683997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:04.338 [2024-11-29 10:36:43.684005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.688445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:04.338 [2024-11-29 10:36:43.688474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:04.338 [2024-11-29 10:36:43.688486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.397 ms 00:31:04.338 [2024-11-29 10:36:43.688493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.688570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.688579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:04.338 [2024-11-29 10:36:43.688587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:04.338 [2024-11-29 10:36:43.688594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.688631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.688640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:04.338 [2024-11-29 10:36:43.688648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:04.338 [2024-11-29 10:36:43.688656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.688676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:04.338 [2024-11-29 10:36:43.689954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.689978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:04.338 [2024-11-29 10:36:43.689987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:31:04.338 [2024-11-29 10:36:43.689999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.690029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.338 [2024-11-29 10:36:43.690040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:04.338 [2024-11-29 10:36:43.690047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:04.338 [2024-11-29 10:36:43.690057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.338 [2024-11-29 10:36:43.690083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:04.338 [2024-11-29 10:36:43.690102] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:04.339 [2024-11-29 10:36:43.690139] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:04.339 [2024-11-29 10:36:43.690157] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:04.339 [2024-11-29 10:36:43.690255] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:04.339 [2024-11-29 10:36:43.690266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:04.339 [2024-11-29 10:36:43.690278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:04.339 [2024-11-29 10:36:43.690288] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690296] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690304] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:04.339 [2024-11-29 10:36:43.690311] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:04.339 [2024-11-29 10:36:43.690318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:04.339 [2024-11-29 10:36:43.690324] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:04.339 [2024-11-29 10:36:43.690335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.339 [2024-11-29 10:36:43.690346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:04.339 [2024-11-29 10:36:43.690353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:31:04.339 [2024-11-29 10:36:43.690360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.339 [2024-11-29 10:36:43.690445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.339 [2024-11-29 10:36:43.690453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:04.339 [2024-11-29 10:36:43.690460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:04.339 [2024-11-29 10:36:43.690467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.339 [2024-11-29 10:36:43.690571] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:04.339 [2024-11-29 10:36:43.690583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:04.339 [2024-11-29 10:36:43.690594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:04.339 [2024-11-29 10:36:43.690615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:04.339 [2024-11-29 10:36:43.690636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:04.339 [2024-11-29 10:36:43.690650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:04.339 [2024-11-29 10:36:43.690658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:04.339 [2024-11-29 10:36:43.690664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:04.339 [2024-11-29 10:36:43.690671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:04.339 [2024-11-29 10:36:43.690678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:04.339 [2024-11-29 10:36:43.690685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:04.339 [2024-11-29 10:36:43.690698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690704] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:04.339 [2024-11-29 10:36:43.690717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:04.339 [2024-11-29 10:36:43.690736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:04.339 [2024-11-29 10:36:43.690755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:04.339 [2024-11-29 10:36:43.690778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:04.339 [2024-11-29 10:36:43.690797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:04.339 [2024-11-29 10:36:43.690846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:04.339 [2024-11-29 10:36:43.690852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:04.339 [2024-11-29 10:36:43.690859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:04.339 [2024-11-29 10:36:43.690865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:04.339 [2024-11-29 10:36:43.690872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:04.339 [2024-11-29 10:36:43.690878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:04.339 [2024-11-29 10:36:43.690891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:04.339 [2024-11-29 10:36:43.690897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:04.339 [2024-11-29 10:36:43.690915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:04.339 [2024-11-29 10:36:43.690922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.339 [2024-11-29 10:36:43.690937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:04.339 [2024-11-29 10:36:43.690944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:04.339 [2024-11-29 10:36:43.690951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:04.339 
[2024-11-29 10:36:43.690957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:04.339 [2024-11-29 10:36:43.690963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:04.339 [2024-11-29 10:36:43.690969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:04.339 [2024-11-29 10:36:43.690977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:04.339 [2024-11-29 10:36:43.690986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.690994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:04.339 [2024-11-29 10:36:43.691001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:04.339 [2024-11-29 10:36:43.691008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:04.339 [2024-11-29 10:36:43.691015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:04.339 [2024-11-29 10:36:43.691023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:04.339 [2024-11-29 10:36:43.691030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:04.339 [2024-11-29 10:36:43.691037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:04.339 [2024-11-29 10:36:43.691044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:04.339 [2024-11-29 10:36:43.691051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:04.339 [2024-11-29 10:36:43.691062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.691069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.691076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.691083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.691090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:04.339 [2024-11-29 10:36:43.691097] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:04.339 [2024-11-29 10:36:43.691104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.691112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:04.339 [2024-11-29 10:36:43.691119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:04.339 [2024-11-29 10:36:43.691126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:04.339 [2024-11-29 10:36:43.691132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:04.339 [2024-11-29 10:36:43.691141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.339 [2024-11-29 10:36:43.691151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:04.340 [2024-11-29 10:36:43.691158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:31:04.340 [2024-11-29 10:36:43.691166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.699241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.699373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:04.340 [2024-11-29 10:36:43.699392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.029 ms 00:31:04.340 [2024-11-29 10:36:43.699400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.699489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.699497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:04.340 [2024-11-29 10:36:43.699509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:04.340 [2024-11-29 10:36:43.699516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.715256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.715293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:04.340 [2024-11-29 10:36:43.715305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.700 ms 00:31:04.340 [2024-11-29 10:36:43.715312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.715350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.715366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:04.340 [2024-11-29 10:36:43.715374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:04.340 [2024-11-29 10:36:43.715381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.715701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.715722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:04.340 [2024-11-29 10:36:43.715731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:31:04.340 [2024-11-29 10:36:43.715738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.715874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.715888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:04.340 [2024-11-29 10:36:43.715901] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:04.340 [2024-11-29 10:36:43.715909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.720758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.720790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:04.340 [2024-11-29 10:36:43.720822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:31:04.340 [2024-11-29 10:36:43.720831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.722987] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:04.340 [2024-11-29 10:36:43.723026] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:04.340 [2024-11-29 10:36:43.723040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.723049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:04.340 [2024-11-29 10:36:43.723058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:31:04.340 [2024-11-29 10:36:43.723065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.738009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.738131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:04.340 [2024-11-29 10:36:43.738150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.907 ms 00:31:04.340 [2024-11-29 10:36:43.738158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.739758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.739788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:04.340 [2024-11-29 10:36:43.739797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.562 ms 00:31:04.340 [2024-11-29 10:36:43.739814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.741126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.741154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:04.340 [2024-11-29 10:36:43.741163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:31:04.340 [2024-11-29 10:36:43.741170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.741503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.741518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:04.340 [2024-11-29 10:36:43.741526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:31:04.340 [2024-11-29 10:36:43.741533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.755480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.755523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:04.340 [2024-11-29 10:36:43.755538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.931 ms 00:31:04.340 [2024-11-29 10:36:43.755545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.762713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:04.340 [2024-11-29 10:36:43.764903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.764930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:04.340 [2024-11-29 10:36:43.764945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.325 ms 00:31:04.340 [2024-11-29 10:36:43.764953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.764997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.765009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:04.340 [2024-11-29 10:36:43.765019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:04.340 [2024-11-29 10:36:43.765033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.765092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.765103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:04.340 [2024-11-29 10:36:43.765111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:04.340 [2024-11-29 10:36:43.765122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.765142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.765151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:04.340 [2024-11-29 10:36:43.765163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:04.340 [2024-11-29 10:36:43.765171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.765200] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:04.340 [2024-11-29 10:36:43.765211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.765220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:04.340 [2024-11-29 10:36:43.765230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:04.340 [2024-11-29 10:36:43.765238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.768095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.768213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:04.340 [2024-11-29 10:36:43.768230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.837 ms 00:31:04.340 [2024-11-29 10:36:43.768238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 [2024-11-29 10:36:43.768303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.340 [2024-11-29 10:36:43.768313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:04.340 [2024-11-29 10:36:43.768322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:04.340 [2024-11-29 10:36:43.768330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.340 
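Each management step in the trace above is logged as an Action / name / duration / status quadruple, and the per-step durations add up to roughly the total that the 'Management process finished' summary just below reports for 'FTL startup'. A minimal sketch of pairing those lines up, assuming one console line per entry (the regexes and the helper name are illustrative, not SPDK code):

    import re

    # Illustrative helper, not part of SPDK: pair each trace_step "name:"
    # line with the "duration:" line that follows it.
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    def step_durations(log_lines):
        """Yield (step_name, duration_ms) for each Action quadruple."""
        name = None
        for line in log_lines:
            if (m := NAME_RE.search(line)):
                name = m.group(1).strip()
            elif name is not None and (m := DUR_RE.search(line)):
                yield name, float(m.group(1))
                name = None

    # sum(ms for _, ms in step_durations(lines)) should land near the
    # 88.838 ms total printed by finish_msg for this startup sequence.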
[2024-11-29 10:36:43.769617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.838 ms, result 0
00:31:05.714  [2024-11-29T10:36:46.113Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-29T10:36:47.049Z] Copying: 90/1024 [MB] (45 MBps) [2024-11-29T10:36:47.983Z] Copying: 136/1024 [MB] (45 MBps) [2024-11-29T10:36:48.916Z] Copying: 181/1024 [MB] (45 MBps) [2024-11-29T10:36:49.849Z] Copying: 225/1024 [MB] (43 MBps) [2024-11-29T10:36:50.783Z] Copying: 269/1024 [MB] (44 MBps) [2024-11-29T10:36:52.232Z] Copying: 313/1024 [MB] (44 MBps) [2024-11-29T10:36:52.799Z] Copying: 359/1024 [MB] (45 MBps) [2024-11-29T10:36:54.174Z] Copying: 401/1024 [MB] (42 MBps) [2024-11-29T10:36:55.109Z] Copying: 446/1024 [MB] (44 MBps) [2024-11-29T10:36:56.038Z] Copying: 490/1024 [MB] (44 MBps) [2024-11-29T10:36:56.970Z] Copying: 534/1024 [MB] (43 MBps) [2024-11-29T10:36:57.904Z] Copying: 579/1024 [MB] (45 MBps) [2024-11-29T10:36:58.837Z] Copying: 627/1024 [MB] (47 MBps) [2024-11-29T10:37:00.211Z] Copying: 671/1024 [MB] (43 MBps) [2024-11-29T10:37:01.145Z] Copying: 716/1024 [MB] (45 MBps) [2024-11-29T10:37:02.075Z] Copying: 762/1024 [MB] (46 MBps) [2024-11-29T10:37:03.024Z] Copying: 809/1024 [MB] (46 MBps) [2024-11-29T10:37:03.967Z] Copying: 856/1024 [MB] (47 MBps) [2024-11-29T10:37:04.897Z] Copying: 903/1024 [MB] (46 MBps) [2024-11-29T10:37:05.830Z] Copying: 947/1024 [MB] (44 MBps) [2024-11-29T10:37:06.766Z] Copying: 993/1024 [MB] (46 MBps) [2024-11-29T10:37:06.766Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-29 10:37:06.434460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:27.301 [2024-11-29 10:37:06.434497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:27.301 [2024-11-29 10:37:06.434507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:27.301 [2024-11-29 10:37:06.434519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.301 [2024-11-29 10:37:06.434536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:27.301 [2024-11-29 10:37:06.434964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:27.301 [2024-11-29 10:37:06.434981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:27.301 [2024-11-29 10:37:06.434997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms
00:31:27.301 [2024-11-29 10:37:06.435003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.301 [2024-11-29 10:37:06.436378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:27.301 [2024-11-29 10:37:06.436499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:27.301 [2024-11-29 10:37:06.436512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms
00:31:27.301 [2024-11-29 10:37:06.436518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.301 [2024-11-29 10:37:06.436547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:27.301 [2024-11-29 10:37:06.436554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:31:27.301 [2024-11-29 10:37:06.436561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:31:27.301 [2024-11-29 10:37:06.436566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.301 [2024-11-29 10:37:06.436600]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.301 [2024-11-29 10:37:06.436608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:27.301 [2024-11-29 10:37:06.436614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:27.301 [2024-11-29 10:37:06.436619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.301 [2024-11-29 10:37:06.436629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:27.301 [2024-11-29 10:37:06.436640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 
10:37:06.436783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:31:27.301 [2024-11-29 10:37:06.436940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.436997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:27.301 [2024-11-29 10:37:06.437258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:27.301 [2024-11-29 10:37:06.437264] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e481aca-6577-46d9-82fb-5f3ca05f74f2 00:31:27.301 [2024-11-29 10:37:06.437270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:27.301 [2024-11-29 10:37:06.437276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:27.302 [2024-11-29 10:37:06.437281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:27.302 [2024-11-29 10:37:06.437287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:27.302 [2024-11-29 10:37:06.437292] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:27.302 [2024-11-29 10:37:06.437302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:27.302 [2024-11-29 10:37:06.437312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:27.302 [2024-11-29 10:37:06.437317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:27.302 [2024-11-29 10:37:06.437322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:27.302 [2024-11-29 10:37:06.437328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.302 [2024-11-29 10:37:06.437334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:27.302 [2024-11-29 10:37:06.437342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:31:27.302 [2024-11-29 10:37:06.437350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.438553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.302 [2024-11-29 10:37:06.438569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:27.302 [2024-11-29 10:37:06.438576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:31:27.302 [2024-11-29 10:37:06.438584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.438652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.302 [2024-11-29 10:37:06.438661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:27.302 [2024-11-29 10:37:06.438667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:27.302 [2024-11-29 10:37:06.438673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.443057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.443150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:27.302 [2024-11-29 10:37:06.443194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.443212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.443270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.443345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:27.302 [2024-11-29 10:37:06.443363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.443379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.443422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.443440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:27.302 [2024-11-29 10:37:06.443455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.443473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.443602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.443621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:27.302 [2024-11-29 10:37:06.443639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.443655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.451329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.451446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:27.302 [2024-11-29 10:37:06.451486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.451508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.457495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.457610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:27.302 [2024-11-29 10:37:06.457694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.457712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.457759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.457777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:27.302 [2024-11-29 10:37:06.457792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.457868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.457902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.457918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:27.302 [2024-11-29 10:37:06.457934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:27.302 [2024-11-29 10:37:06.457951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.302 [2024-11-29 10:37:06.458009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:27.302 [2024-11-29 10:37:06.458049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:27.302 
[2024-11-29 10:37:06.458064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:27.302 [2024-11-29 10:37:06.458078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.302 [2024-11-29 10:37:06.458106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:27.302 [2024-11-29 10:37:06.458123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:27.302 [2024-11-29 10:37:06.458166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:27.302 [2024-11-29 10:37:06.458183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.302 [2024-11-29 10:37:06.458223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:27.302 [2024-11-29 10:37:06.458301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:27.302 [2024-11-29 10:37:06.458323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:27.302 [2024-11-29 10:37:06.458337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.302 [2024-11-29 10:37:06.458381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:27.302 [2024-11-29 10:37:06.458469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:27.302 [2024-11-29 10:37:06.458486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:27.302 [2024-11-29 10:37:06.458506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:27.302 [2024-11-29 10:37:06.458607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 24.124 ms, result 0
00:31:28.676
00:31:28.676
00:31:28.676 10:37:07 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 [2024-11-29 10:37:07.800861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
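The restore step above reads --count=262144 blocks from the ftl0 bdev back into a file. As a quick cross-check (a back-of-envelope sketch; the 4096-byte logical block size is an assumption consistent with the 1024 MiB totals in the 'Copying:' progress lines earlier, not a value printed by this log):

    # Hypothetical sanity check, not part of the test scripts:
    # 262144 blocks of 4096 bytes is exactly 1 GiB, which matches the
    # "Copying: 1024/1024 [MB]" progress reported by spdk_dd.
    count = 262144
    block_size = 4096  # bytes per logical block; assumed
    print(count * block_size // (1024 * 1024))  # -> 1024 (MiB)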
00:31:28.676 [2024-11-29 10:37:07.801100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95287 ] 00:31:28.676 [2024-11-29 10:37:07.940558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.676 [2024-11-29 10:37:07.956975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.676 [2024-11-29 10:37:08.038588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:28.676 [2024-11-29 10:37:08.038815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:28.936 [2024-11-29 10:37:08.185065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.185190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:28.936 [2024-11-29 10:37:08.185247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:28.936 [2024-11-29 10:37:08.185271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.185320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.185339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:28.936 [2024-11-29 10:37:08.185354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:28.936 [2024-11-29 10:37:08.185375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.185446] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:28.936 [2024-11-29 10:37:08.185639] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:28.936 [2024-11-29 10:37:08.185672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.185724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:28.936 [2024-11-29 10:37:08.185744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:31:28.936 [2024-11-29 10:37:08.185786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.186095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:28.936 [2024-11-29 10:37:08.186161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.186179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:28.936 [2024-11-29 10:37:08.186195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:28.936 [2024-11-29 10:37:08.186213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.186256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.186359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:28.936 [2024-11-29 10:37:08.186377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:28.936 [2024-11-29 10:37:08.186391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.186589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:28.936 [2024-11-29 10:37:08.186610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:28.936 [2024-11-29 10:37:08.186625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:31:28.936 [2024-11-29 10:37:08.186639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.186742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.186795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:28.936 [2024-11-29 10:37:08.186843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:28.936 [2024-11-29 10:37:08.186860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.186886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.186901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:28.936 [2024-11-29 10:37:08.186916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:28.936 [2024-11-29 10:37:08.186936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.186960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:28.936 [2024-11-29 10:37:08.188284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.188305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:28.936 [2024-11-29 10:37:08.188312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:31:28.936 [2024-11-29 10:37:08.188321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.188343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.188349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:28.936 [2024-11-29 10:37:08.188358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:28.936 [2024-11-29 10:37:08.188364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.188384] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:28.936 [2024-11-29 10:37:08.188404] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:28.936 [2024-11-29 10:37:08.188435] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:28.936 [2024-11-29 10:37:08.188451] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:28.936 [2024-11-29 10:37:08.188529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:28.936 [2024-11-29 10:37:08.188537] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:28.936 [2024-11-29 10:37:08.188545] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:28.936 [2024-11-29 10:37:08.188553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:28.936 [2024-11-29 10:37:08.188568] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:28.936 [2024-11-29 10:37:08.188574] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:28.936 [2024-11-29 10:37:08.188579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:28.936 [2024-11-29 10:37:08.188589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:28.936 [2024-11-29 10:37:08.188597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:28.936 [2024-11-29 10:37:08.188605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.188611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:28.936 [2024-11-29 10:37:08.188619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:31:28.936 [2024-11-29 10:37:08.188625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.188689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.936 [2024-11-29 10:37:08.188696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:28.936 [2024-11-29 10:37:08.188703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:28.936 [2024-11-29 10:37:08.188708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.936 [2024-11-29 10:37:08.188782] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:28.936 [2024-11-29 10:37:08.188789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:28.936 [2024-11-29 10:37:08.188795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:28.936 [2024-11-29 10:37:08.188911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.936 [2024-11-29 10:37:08.188931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:28.936 [2024-11-29 10:37:08.188945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:28.936 [2024-11-29 10:37:08.188958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:28.936 [2024-11-29 10:37:08.188973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:28.936 [2024-11-29 10:37:08.188987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:28.936 [2024-11-29 10:37:08.189045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:28.936 [2024-11-29 10:37:08.189059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:28.936 [2024-11-29 10:37:08.189072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:28.936 [2024-11-29 10:37:08.189086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:28.936 [2024-11-29 10:37:08.189100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:28.936 [2024-11-29 10:37:08.189115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:28.936 [2024-11-29 10:37:08.189195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:28.936 [2024-11-29 10:37:08.189240] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:28.936 [2024-11-29 10:37:08.189274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.936 [2024-11-29 10:37:08.189302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:28.936 [2024-11-29 10:37:08.189315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.936 [2024-11-29 10:37:08.189342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:28.936 [2024-11-29 10:37:08.189373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.936 [2024-11-29 10:37:08.189404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:28.936 [2024-11-29 10:37:08.189418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.936 [2024-11-29 10:37:08.189444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:28.936 [2024-11-29 10:37:08.189457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:28.936 [2024-11-29 10:37:08.189472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:28.936 [2024-11-29 10:37:08.189502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:28.936 [2024-11-29 10:37:08.189522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:28.936 [2024-11-29 10:37:08.189536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:28.936 [2024-11-29 10:37:08.189571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:28.936 [2024-11-29 10:37:08.189587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:28.937 [2024-11-29 10:37:08.189601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.937 [2024-11-29 10:37:08.189614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:28.937 [2024-11-29 10:37:08.189643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:28.937 [2024-11-29 10:37:08.189658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.937 [2024-11-29 10:37:08.189672] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:28.937 [2024-11-29 10:37:08.189691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:28.937 [2024-11-29 10:37:08.189708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:28.937 [2024-11-29 10:37:08.189724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.937 [2024-11-29 10:37:08.189761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:28.937 [2024-11-29 10:37:08.189777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:28.937 [2024-11-29 10:37:08.189791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:28.937 
[2024-11-29 10:37:08.189816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:28.937 [2024-11-29 10:37:08.189835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:28.937 [2024-11-29 10:37:08.189857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:28.937 [2024-11-29 10:37:08.189872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:28.937 [2024-11-29 10:37:08.189919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.189943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:28.937 [2024-11-29 10:37:08.189964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:28.937 [2024-11-29 10:37:08.189985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:28.937 [2024-11-29 10:37:08.190007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:28.937 [2024-11-29 10:37:08.190048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:28.937 [2024-11-29 10:37:08.190070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:28.937 [2024-11-29 10:37:08.190111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:28.937 [2024-11-29 10:37:08.190134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:28.937 [2024-11-29 10:37:08.190156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:28.937 [2024-11-29 10:37:08.190195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.190218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.190245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.190291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.190315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:28.937 [2024-11-29 10:37:08.190336] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:28.937 [2024-11-29 10:37:08.190358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.190401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:28.937 [2024-11-29 10:37:08.190423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:28.937 [2024-11-29 10:37:08.190467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:28.937 [2024-11-29 10:37:08.190489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:28.937 [2024-11-29 10:37:08.190530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.190546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:28.937 [2024-11-29 10:37:08.190561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:31:28.937 [2024-11-29 10:37:08.190576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.195928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.196009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:28.937 [2024-11-29 10:37:08.196046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.293 ms 00:31:28.937 [2024-11-29 10:37:08.196063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.196130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.196173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:28.937 [2024-11-29 10:37:08.196190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:28.937 [2024-11-29 10:37:08.196204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.214978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.215123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:28.937 [2024-11-29 10:37:08.215188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.713 ms 00:31:28.937 [2024-11-29 10:37:08.215215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.215271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.215299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:28.937 [2024-11-29 10:37:08.215323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:28.937 [2024-11-29 10:37:08.215345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.215467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.215594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:28.937 [2024-11-29 10:37:08.215623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:28.937 [2024-11-29 10:37:08.215646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.215817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.215854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:28.937 [2024-11-29 10:37:08.215879] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:31:28.937 [2024-11-29 10:37:08.215905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.221066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.221096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:28.937 [2024-11-29 10:37:08.221115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:31:28.937 [2024-11-29 10:37:08.221122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.221213] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:28.937 [2024-11-29 10:37:08.221230] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:28.937 [2024-11-29 10:37:08.221240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.221248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:28.937 [2024-11-29 10:37:08.221260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:28.937 [2024-11-29 10:37:08.221270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.232986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.233017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:28.937 [2024-11-29 10:37:08.233025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.700 ms 00:31:28.937 [2024-11-29 10:37:08.233031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.233117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.233124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:28.937 [2024-11-29 10:37:08.233129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:28.937 [2024-11-29 10:37:08.233136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.233170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.233179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:28.937 [2024-11-29 10:37:08.233185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:28.937 [2024-11-29 10:37:08.233191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.233406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.233413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:28.937 [2024-11-29 10:37:08.233422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:31:28.937 [2024-11-29 10:37:08.233428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.233441] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:28.937 [2024-11-29 10:37:08.233449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.233456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:28.937 [2024-11-29 10:37:08.233462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:28.937 [2024-11-29 10:37:08.233467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.239938] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:28.937 [2024-11-29 10:37:08.240109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.240120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:28.937 [2024-11-29 10:37:08.240127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.630 ms 00:31:28.937 [2024-11-29 10:37:08.240133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.937 [2024-11-29 10:37:08.241891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.937 [2024-11-29 10:37:08.241913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:28.938 [2024-11-29 10:37:08.241920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:31:28.938 [2024-11-29 10:37:08.241926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.938 [2024-11-29 10:37:08.241980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.938 [2024-11-29 10:37:08.241987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:28.938 [2024-11-29 10:37:08.241993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:28.938 [2024-11-29 10:37:08.242000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.938 [2024-11-29 10:37:08.242016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.938 [2024-11-29 10:37:08.242022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:28.938 [2024-11-29 10:37:08.242027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:28.938 [2024-11-29 10:37:08.242037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.938 [2024-11-29 10:37:08.242057] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:28.938 [2024-11-29 10:37:08.242063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.938 [2024-11-29 10:37:08.242072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:28.938 [2024-11-29 10:37:08.242079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:28.938 [2024-11-29 10:37:08.242085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.938 [2024-11-29 10:37:08.245288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.938 [2024-11-29 10:37:08.245316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:28.938 [2024-11-29 10:37:08.245324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:31:28.938 [2024-11-29 10:37:08.245329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.938 [2024-11-29 10:37:08.245381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.938 [2024-11-29 10:37:08.245388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:28.938 [2024-11-29 10:37:08.245394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:31:28.938 [2024-11-29 10:37:08.245399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.938 [2024-11-29 10:37:08.246082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 60.699 ms, result 0 00:31:30.313  [2024-11-29T10:37:10.711Z] Copying: 48/1024 [MB] (48 MBps) [2024-11-29T10:37:11.651Z] Copying: 100/1024 [MB] (51 MBps) [2024-11-29T10:37:12.592Z] Copying: 144/1024 [MB] (44 MBps) [2024-11-29T10:37:13.533Z] Copying: 168/1024 [MB] (23 MBps) [2024-11-29T10:37:14.566Z] Copying: 181/1024 [MB] (13 MBps) [2024-11-29T10:37:15.509Z] Copying: 193/1024 [MB] (12 MBps) [2024-11-29T10:37:16.450Z] Copying: 210/1024 [MB] (16 MBps) [2024-11-29T10:37:17.391Z] Copying: 226/1024 [MB] (15 MBps) [2024-11-29T10:37:18.771Z] Copying: 249/1024 [MB] (23 MBps) [2024-11-29T10:37:19.717Z] Copying: 261/1024 [MB] (11 MBps) [2024-11-29T10:37:20.658Z] Copying: 277/1024 [MB] (16 MBps) [2024-11-29T10:37:21.602Z] Copying: 296/1024 [MB] (18 MBps) [2024-11-29T10:37:22.546Z] Copying: 313/1024 [MB] (17 MBps) [2024-11-29T10:37:23.490Z] Copying: 330/1024 [MB] (17 MBps) [2024-11-29T10:37:24.435Z] Copying: 349/1024 [MB] (19 MBps) [2024-11-29T10:37:25.821Z] Copying: 370/1024 [MB] (20 MBps) [2024-11-29T10:37:26.505Z] Copying: 397/1024 [MB] (27 MBps) [2024-11-29T10:37:27.451Z] Copying: 410/1024 [MB] (12 MBps) [2024-11-29T10:37:28.397Z] Copying: 421/1024 [MB] (11 MBps) [2024-11-29T10:37:29.785Z] Copying: 433/1024 [MB] (11 MBps) [2024-11-29T10:37:30.730Z] Copying: 447/1024 [MB] (13 MBps) [2024-11-29T10:37:31.675Z] Copying: 468/1024 [MB] (21 MBps) [2024-11-29T10:37:32.620Z] Copying: 488/1024 [MB] (20 MBps) [2024-11-29T10:37:33.565Z] Copying: 504/1024 [MB] (15 MBps) [2024-11-29T10:37:34.511Z] Copying: 519/1024 [MB] (15 MBps) [2024-11-29T10:37:35.455Z] Copying: 532/1024 [MB] (12 MBps) [2024-11-29T10:37:36.392Z] Copying: 544/1024 [MB] (12 MBps) [2024-11-29T10:37:37.775Z] Copying: 562/1024 [MB] (18 MBps) [2024-11-29T10:37:38.716Z] Copying: 584/1024 [MB] (21 MBps) [2024-11-29T10:37:39.688Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-29T10:37:40.637Z] Copying: 606/1024 [MB] (11 MBps) [2024-11-29T10:37:41.582Z] Copying: 626/1024 [MB] (19 MBps) [2024-11-29T10:37:42.524Z] Copying: 644/1024 [MB] (17 MBps) [2024-11-29T10:37:43.467Z] Copying: 662/1024 [MB] (18 MBps) [2024-11-29T10:37:44.411Z] Copying: 674/1024 [MB] (12 MBps) [2024-11-29T10:37:45.802Z] Copying: 691/1024 [MB] (17 MBps) [2024-11-29T10:37:46.748Z] Copying: 706/1024 [MB] (15 MBps) [2024-11-29T10:37:47.687Z] Copying: 723/1024 [MB] (16 MBps) [2024-11-29T10:37:48.620Z] Copying: 744/1024 [MB] (21 MBps) [2024-11-29T10:37:49.559Z] Copying: 779/1024 [MB] (34 MBps) [2024-11-29T10:37:50.502Z] Copying: 804/1024 [MB] (25 MBps) [2024-11-29T10:37:51.445Z] Copying: 818/1024 [MB] (13 MBps) [2024-11-29T10:37:52.390Z] Copying: 837/1024 [MB] (19 MBps) [2024-11-29T10:37:53.788Z] Copying: 853/1024 [MB] (16 MBps) [2024-11-29T10:37:54.730Z] Copying: 873/1024 [MB] (20 MBps) [2024-11-29T10:37:55.669Z] Copying: 889/1024 [MB] (15 MBps) [2024-11-29T10:37:56.609Z] Copying: 902/1024 [MB] (12 MBps) [2024-11-29T10:37:57.552Z] Copying: 915/1024 [MB] (13 MBps) [2024-11-29T10:37:58.494Z] Copying: 929/1024 [MB] (13 MBps) [2024-11-29T10:37:59.443Z] Copying: 949/1024 [MB] (20 MBps) [2024-11-29T10:38:00.422Z] Copying: 970/1024 [MB] (20 MBps) [2024-11-29T10:38:01.812Z] Copying: 998/1024 [MB] (28 MBps) [2024-11-29T10:38:01.812Z] Copying: 1019/1024 [MB] (21 MBps) [2024-11-29T10:38:01.812Z] Copying: 1024/1024 
[MB] (average 19 MBps)[2024-11-29 10:38:01.751108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.347 [2024-11-29 10:38:01.751424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:22.347 [2024-11-29 10:38:01.751458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:22.347 [2024-11-29 10:38:01.751474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.347 [2024-11-29 10:38:01.751846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:22.347 [2024-11-29 10:38:01.752432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.347 [2024-11-29 10:38:01.752465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:22.347 [2024-11-29 10:38:01.752481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:32:22.347 [2024-11-29 10:38:01.752495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.347 [2024-11-29 10:38:01.752908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.347 [2024-11-29 10:38:01.752928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:22.347 [2024-11-29 10:38:01.752942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:32:22.347 [2024-11-29 10:38:01.752956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.347 [2024-11-29 10:38:01.753011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.347 [2024-11-29 10:38:01.753027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:22.347 [2024-11-29 10:38:01.753042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:22.347 [2024-11-29 10:38:01.753061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.347 [2024-11-29 10:38:01.753195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.347 [2024-11-29 10:38:01.753211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:22.347 [2024-11-29 10:38:01.753225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:22.347 [2024-11-29 10:38:01.753238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.347 [2024-11-29 10:38:01.753262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:22.347 [2024-11-29 10:38:01.753283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 
[2024-11-29 10:38:01.753394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:22.347 [2024-11-29 10:38:01.753567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:32:22.348 [2024-11-29 10:38:01.753750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.753997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:22.348 [2024-11-29 10:38:01.754621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:22.349 [2024-11-29 10:38:01.754858] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:22.349 [2024-11-29 10:38:01.754875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e481aca-6577-46d9-82fb-5f3ca05f74f2 00:32:22.349 [2024-11-29 10:38:01.754890] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:22.349 [2024-11-29 10:38:01.754903] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:22.349 [2024-11-29 10:38:01.754917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:22.349 [2024-11-29 10:38:01.754931] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:22.349 [2024-11-29 10:38:01.754948] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:22.349 [2024-11-29 10:38:01.754962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:32:22.349 [2024-11-29 10:38:01.754976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:22.349 [2024-11-29 10:38:01.754987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:22.349 [2024-11-29 10:38:01.754999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:22.349 [2024-11-29 10:38:01.755013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.349 [2024-11-29 10:38:01.755027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:22.349 [2024-11-29 10:38:01.755042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:32:22.349 [2024-11-29 10:38:01.755063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.757788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.349 [2024-11-29 10:38:01.757885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:22.349 [2024-11-29 10:38:01.757902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:32:22.349 [2024-11-29 10:38:01.757915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.758028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.349 [2024-11-29 10:38:01.758044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:22.349 [2024-11-29 10:38:01.758063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:32:22.349 [2024-11-29 10:38:01.758076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.762834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.762861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:22.349 [2024-11-29 10:38:01.762870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.762877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.762927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.762935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:22.349 [2024-11-29 10:38:01.762945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.762952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.762998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.763007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:22.349 [2024-11-29 10:38:01.763015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.763023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.763037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.763045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:22.349 [2024-11-29 10:38:01.763056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.763065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.771534] 
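
The statistics dump above reports total writes = 32 against user writes = 0, and prints "WAF: inf". The write amplification factor is media writes divided by user writes, so with no user I/O completed yet the ratio is undefined and is logged as "inf". A minimal sketch of that computation (a hypothetical helper, not SPDK code), guarding the divide-by-zero case the same way:

  # Hypothetical helper: derive WAF from the "total writes" / "user writes"
  # counters printed by ftl_dev_dump_stats.
  waf() {
    local total=$1 user=$2
    if [ "$user" -eq 0 ]; then
      echo "WAF: inf"   # no user writes yet -> ratio undefined, logged as inf
    else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi
  }
  waf 32 0              # matches the dump above -> WAF: inf
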
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.771571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:22.349 [2024-11-29 10:38:01.771586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.771594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.778962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.778997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:22.349 [2024-11-29 10:38:01.779011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.779047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:22.349 [2024-11-29 10:38:01.779054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.779108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:22.349 [2024-11-29 10:38:01.779117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.779183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:22.349 [2024-11-29 10:38:01.779191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.779231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:22.349 [2024-11-29 10:38:01.779239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.779288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:22.349 [2024-11-29 10:38:01.779296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.349 [2024-11-29 10:38:01.779349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:22.349 [2024-11-29 10:38:01.779356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.349 [2024-11-29 10:38:01.779363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:22.349 [2024-11-29 10:38:01.779478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.356 ms, result 0 00:32:22.608 00:32:22.608 00:32:22.608 10:38:01 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:25.152 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:25.152 10:38:04 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:25.152 [2024-11-29 10:38:04.152669] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:32:25.152 [2024-11-29 10:38:04.152789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95852 ] 00:32:25.152 [2024-11-29 10:38:04.294269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.152 [2024-11-29 10:38:04.313201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.152 [2024-11-29 10:38:04.400730] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:25.152 [2024-11-29 10:38:04.400794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:25.152 [2024-11-29 10:38:04.554295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.152 [2024-11-29 10:38:04.554468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:25.152 [2024-11-29 10:38:04.554493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:25.152 [2024-11-29 10:38:04.554502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.152 [2024-11-29 10:38:04.554553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.152 [2024-11-29 10:38:04.554562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:25.152 [2024-11-29 10:38:04.554571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:25.152 [2024-11-29 10:38:04.554583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.152 [2024-11-29 10:38:04.554609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:25.152 [2024-11-29 10:38:04.554852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:25.152 [2024-11-29 10:38:04.554869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.152 [2024-11-29 10:38:04.554877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:25.152 [2024-11-29 10:38:04.554891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:32:25.152 [2024-11-29 10:38:04.554898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.152 [2024-11-29 10:38:04.555155] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:25.152 [2024-11-29 10:38:04.555181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.152 [2024-11-29 10:38:04.555190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
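
Before the fast shutdown above, the data phase copied 1024 MiB through ftl0 and the progress trace closed with "(average 19 MBps)". That figure is consistent with the wall-clock stamps in the log: FTL startup finished around 10:37:08 and the last progress entry landed at 10:38:01. A quick sanity check (GNU date arithmetic; both timestamps are taken from the log above):

  start=$(date -u -d '2024-11-29T10:37:08' +%s)
  end=$(date -u -d '2024-11-29T10:38:01' +%s)
  echo "scale=1; 1024 / ($end - $start)" | bc   # ~53 s -> 19.3 MiB/s
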
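The two shell steps logged right after the shutdown are the heart of the restore test: the file read back from the FTL bdev is checked against a stored md5, then spdk_dd writes the test file again at a block offset through ftl0. Condensed into a standalone sketch (both commands are taken verbatim from the log; the surrounding control flow is an assumption, not the actual test/ftl/restore.sh source):

  SPDK=/home/vagrant/spdk_repo/spdk

  # 1. Verify data integrity after the FTL fast shutdown / restore cycle.
  md5sum -c "$SPDK/test/ftl/testfile.md5" || exit 1

  # 2. Re-write the test file at an offset (--seek=131072 blocks) through
  #    bdev ftl0, recreated from the JSON config saved by the earlier setup.
  "$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/testfile" \
    --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" \
    --seek=131072
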
Load super block 00:32:25.153 [2024-11-29 10:38:04.555199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:25.153 [2024-11-29 10:38:04.555209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.555253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.555266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:25.153 [2024-11-29 10:38:04.555274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:25.153 [2024-11-29 10:38:04.555288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.555517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.555534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:25.153 [2024-11-29 10:38:04.555543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:32:25.153 [2024-11-29 10:38:04.555552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.555625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.555635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:25.153 [2024-11-29 10:38:04.555643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:25.153 [2024-11-29 10:38:04.555653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.555672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.555679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:25.153 [2024-11-29 10:38:04.555687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:25.153 [2024-11-29 10:38:04.555693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.555710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:25.153 [2024-11-29 10:38:04.557147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.557165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:25.153 [2024-11-29 10:38:04.557180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:32:25.153 [2024-11-29 10:38:04.557187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.557217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.557230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:25.153 [2024-11-29 10:38:04.557238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:25.153 [2024-11-29 10:38:04.557245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.557261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:25.153 [2024-11-29 10:38:04.557284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:25.153 [2024-11-29 10:38:04.557320] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:25.153 [2024-11-29 
10:38:04.557335] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:25.153 [2024-11-29 10:38:04.557435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:25.153 [2024-11-29 10:38:04.557445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:25.153 [2024-11-29 10:38:04.557456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:25.153 [2024-11-29 10:38:04.557469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:25.153 [2024-11-29 10:38:04.557482] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:25.153 [2024-11-29 10:38:04.557491] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:25.153 [2024-11-29 10:38:04.557498] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:25.153 [2024-11-29 10:38:04.557505] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:25.153 [2024-11-29 10:38:04.557512] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:25.153 [2024-11-29 10:38:04.557519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.557526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:25.153 [2024-11-29 10:38:04.557534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:32:25.153 [2024-11-29 10:38:04.557544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.557625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.153 [2024-11-29 10:38:04.557634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:25.153 [2024-11-29 10:38:04.557646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:25.153 [2024-11-29 10:38:04.557653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.153 [2024-11-29 10:38:04.557762] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:25.153 [2024-11-29 10:38:04.557773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:25.153 [2024-11-29 10:38:04.557783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:25.153 [2024-11-29 10:38:04.557792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.153 [2024-11-29 10:38:04.557956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:25.153 [2024-11-29 10:38:04.557989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:25.153 [2024-11-29 10:38:04.558055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:25.153 [2024-11-29 10:38:04.558096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:25.153 [2024-11-29 10:38:04.558116] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:25.153 [2024-11-29 10:38:04.558137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:25.153 [2024-11-29 10:38:04.558159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:25.153 [2024-11-29 10:38:04.558179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:25.153 [2024-11-29 10:38:04.558200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:25.153 [2024-11-29 10:38:04.558241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:25.153 [2024-11-29 10:38:04.558347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:25.153 [2024-11-29 10:38:04.558400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:25.153 [2024-11-29 10:38:04.558454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:25.153 [2024-11-29 10:38:04.558507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:25.153 [2024-11-29 10:38:04.558560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:25.153 [2024-11-29 10:38:04.558630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:25.153 [2024-11-29 10:38:04.558648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:25.153 [2024-11-29 10:38:04.558666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:25.153 [2024-11-29 10:38:04.558684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:25.153 [2024-11-29 10:38:04.558702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:25.153 [2024-11-29 10:38:04.558739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:25.153 [2024-11-29 10:38:04.558795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:25.153 [2024-11-29 10:38:04.558827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.153 [2024-11-29 
10:38:04.558845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:25.153 [2024-11-29 10:38:04.558865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:25.153 [2024-11-29 10:38:04.558883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:25.153 [2024-11-29 10:38:04.558938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.153 [2024-11-29 10:38:04.558960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:25.153 [2024-11-29 10:38:04.558979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:25.153 [2024-11-29 10:38:04.558998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:25.153 [2024-11-29 10:38:04.559020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:25.153 [2024-11-29 10:38:04.559114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:25.153 [2024-11-29 10:38:04.559132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:25.153 [2024-11-29 10:38:04.559152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:25.153 [2024-11-29 10:38:04.559563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:25.153 [2024-11-29 10:38:04.559608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:25.153 [2024-11-29 10:38:04.559638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:25.154 [2024-11-29 10:38:04.559667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:25.154 [2024-11-29 10:38:04.559696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:25.154 [2024-11-29 10:38:04.559724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:25.154 [2024-11-29 10:38:04.559753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:25.154 [2024-11-29 10:38:04.559781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:25.154 [2024-11-29 10:38:04.559825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:25.154 [2024-11-29 10:38:04.559896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:25.154 [2024-11-29 10:38:04.559958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:25.154 [2024-11-29 10:38:04.560006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:25.154 [2024-11-29 10:38:04.560050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:25.154 [2024-11-29 10:38:04.560079] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:25.154 [2024-11-29 10:38:04.560109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:25.154 [2024-11-29 10:38:04.560137] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:25.154 [2024-11-29 10:38:04.560194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:25.154 [2024-11-29 10:38:04.560225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:25.154 [2024-11-29 10:38:04.560253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:25.154 [2024-11-29 10:38:04.560281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:25.154 [2024-11-29 10:38:04.560310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:25.154 [2024-11-29 10:38:04.560362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.560407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:25.154 [2024-11-29 10:38:04.560430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:32:25.154 [2024-11-29 10:38:04.560480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.566546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.566652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:25.154 [2024-11-29 10:38:04.566700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.971 ms 00:32:25.154 [2024-11-29 10:38:04.566722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.566831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.566856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:25.154 [2024-11-29 10:38:04.566877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:32:25.154 [2024-11-29 10:38:04.566896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.583382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.583510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:25.154 [2024-11-29 10:38:04.583567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.401 ms 00:32:25.154 [2024-11-29 10:38:04.583591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.583643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.583667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:25.154 [2024-11-29 10:38:04.583688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:25.154 [2024-11-29 10:38:04.583707] 
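
The superblock dump above lists every region in raw block units (blk_offs/blk_sz) while the preceding layout dump speaks in MiB; the two agree once FTL's 4 KiB block size is applied. For example, the l2p region's blk_sz:0x5000 is 20480 blocks, i.e. exactly the "80.00 MiB" shown earlier. A small conversion helper (illustrative only, assuming the 4 KiB FTL block):

  # Convert a block count from the SB metadata dump to MiB (4 KiB FTL blocks).
  blk_to_mib() {
    echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc   # $(( )) accepts 0x.. hex
  }
  blk_to_mib 0x5000   # l2p region     -> 80.00 (MiB)
  blk_to_mib 0x20     # sb region      ->  .12  (MiB)
  blk_to_mib 0x800    # each p2l ckpt  ->  8.00 (MiB)
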
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.583820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.583928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:25.154 [2024-11-29 10:38:04.583952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:25.154 [2024-11-29 10:38:04.583972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.584096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.584173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:25.154 [2024-11-29 10:38:04.584193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:32:25.154 [2024-11-29 10:38:04.584202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.589344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.589377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:25.154 [2024-11-29 10:38:04.589392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.122 ms 00:32:25.154 [2024-11-29 10:38:04.589405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.589507] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:25.154 [2024-11-29 10:38:04.589521] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:25.154 [2024-11-29 10:38:04.589532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.589540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:25.154 [2024-11-29 10:38:04.589549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:25.154 [2024-11-29 10:38:04.589560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.602847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.602872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:25.154 [2024-11-29 10:38:04.602882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.271 ms 00:32:25.154 [2024-11-29 10:38:04.602894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.603003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.603012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:25.154 [2024-11-29 10:38:04.603019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:32:25.154 [2024-11-29 10:38:04.603030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.603075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.603088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:25.154 [2024-11-29 10:38:04.603095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:25.154 [2024-11-29 10:38:04.603103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
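The "SB metadata layout" dump a few steps above is a flat region table: each entry carries a region type, a format version, a block offset, and a block size, and consecutive regions are expected to tile the device address space with no gaps or overlaps. A throwaway sanity check, with the nvc table transcribed from that dump (the 4 KiB FTL block size used for the MiB conversion is an assumption here, not something this log states):

```python
# Region table copied from the "SB metadata layout - nvc" dump above:
# (type, blk_offs, blk_sz), all in FTL blocks (assumed 4 KiB each).
nvc = [
    (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80),
    (0x4, 0x50a0, 0x80), (0xa, 0x5120, 0x800), (0xb, 0x5920, 0x800),
    (0xc, 0x6120, 0x800), (0xd, 0x6920, 0x800), (0xe, 0x7120, 0x40),
    (0xf, 0x7160, 0x40), (0x10, 0x71a0, 0x20), (0x11, 0x71c0, 0x20),
    (0x6, 0x71e0, 0x20), (0x7, 0x7200, 0x20), (0xfffffffe, 0x7220, 0x13c0e0),
]

# Every region must start exactly where the previous one ends.
for (_, off, sz), (_, nxt, _) in zip(nvc, nvc[1:]):
    assert off + sz == nxt, hex(off + sz)

end = nvc[-1][1] + nvc[-1][2]
print(hex(end), end * 4096 // 2**20, "MiB")   # 0x143300 -> 5171 MiB
```

The table checks out: the final region (type 0xfffffffe, which by its placement looks like an unallocated filler) ends at block 0x143300, i.e. 5171 MiB at 4 KiB per block, matching the "NV cache device capacity: 5171.00 MiB" the driver reports during layout setup.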
00:32:25.154 [2024-11-29 10:38:04.603390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.603406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:25.154 [2024-11-29 10:38:04.603415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:32:25.154 [2024-11-29 10:38:04.603424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.603438] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:25.154 [2024-11-29 10:38:04.603449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.603458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:25.154 [2024-11-29 10:38:04.603465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:25.154 [2024-11-29 10:38:04.603473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.154 [2024-11-29 10:38:04.611389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:25.154 [2024-11-29 10:38:04.611505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.154 [2024-11-29 10:38:04.611514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:25.154 [2024-11-29 10:38:04.611523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.016 ms 00:32:25.154 [2024-11-29 10:38:04.611531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.613809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.414 [2024-11-29 10:38:04.613845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:25.414 [2024-11-29 10:38:04.613856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:32:25.414 [2024-11-29 10:38:04.613864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.613927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.414 [2024-11-29 10:38:04.613937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:25.414 [2024-11-29 10:38:04.613945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:25.414 [2024-11-29 10:38:04.613955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.613990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.414 [2024-11-29 10:38:04.613999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:25.414 [2024-11-29 10:38:04.614006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:25.414 [2024-11-29 10:38:04.614014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.614047] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:25.414 [2024-11-29 10:38:04.614060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.414 [2024-11-29 10:38:04.614067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:25.414 [2024-11-29 10:38:04.614075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:25.414 [2024-11-29 10:38:04.614082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.618257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.414 [2024-11-29 10:38:04.618289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:25.414 [2024-11-29 10:38:04.618299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:32:25.414 [2024-11-29 10:38:04.618306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.618372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.414 [2024-11-29 10:38:04.618383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:25.414 [2024-11-29 10:38:04.618392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:25.414 [2024-11-29 10:38:04.618399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.414 [2024-11-29 10:38:04.619333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 64.641 ms, result 0 00:32:26.347  [2024-11-29T10:38:06.753Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-29T10:38:07.699Z] Copying: 57/1024 [MB] (22 MBps) [2024-11-29T10:38:08.685Z] Copying: 71/1024 [MB] (14 MBps) [2024-11-29T10:38:10.073Z] Copying: 85/1024 [MB] (14 MBps) [2024-11-29T10:38:10.647Z] Copying: 99/1024 [MB] (13 MBps) [2024-11-29T10:38:12.031Z] Copying: 119/1024 [MB] (19 MBps) [2024-11-29T10:38:12.975Z] Copying: 152/1024 [MB] (33 MBps) [2024-11-29T10:38:13.926Z] Copying: 176/1024 [MB] (23 MBps) [2024-11-29T10:38:14.871Z] Copying: 195/1024 [MB] (19 MBps) [2024-11-29T10:38:15.817Z] Copying: 215/1024 [MB] (20 MBps) [2024-11-29T10:38:16.762Z] Copying: 234/1024 [MB] (18 MBps) [2024-11-29T10:38:17.706Z] Copying: 246/1024 [MB] (11 MBps) [2024-11-29T10:38:18.649Z] Copying: 258/1024 [MB] (12 MBps) [2024-11-29T10:38:20.037Z] Copying: 273/1024 [MB] (15 MBps) [2024-11-29T10:38:20.981Z] Copying: 290/1024 [MB] (17 MBps) [2024-11-29T10:38:21.926Z] Copying: 303/1024 [MB] (12 MBps) [2024-11-29T10:38:22.871Z] Copying: 326/1024 [MB] (23 MBps) [2024-11-29T10:38:23.815Z] Copying: 340/1024 [MB] (13 MBps) [2024-11-29T10:38:24.760Z] Copying: 352/1024 [MB] (12 MBps) [2024-11-29T10:38:25.702Z] Copying: 372/1024 [MB] (19 MBps) [2024-11-29T10:38:26.647Z] Copying: 393/1024 [MB] (21 MBps) [2024-11-29T10:38:28.035Z] Copying: 408/1024 [MB] (14 MBps) [2024-11-29T10:38:28.981Z] Copying: 425/1024 [MB] (16 MBps) [2024-11-29T10:38:29.927Z] Copying: 437/1024 [MB] (12 MBps) [2024-11-29T10:38:30.871Z] Copying: 457/1024 [MB] (19 MBps) [2024-11-29T10:38:31.853Z] Copying: 469/1024 [MB] (11 MBps) [2024-11-29T10:38:32.811Z] Copying: 480/1024 [MB] (11 MBps) [2024-11-29T10:38:33.755Z] Copying: 523/1024 [MB] (42 MBps) [2024-11-29T10:38:34.701Z] Copying: 550/1024 [MB] (27 MBps) [2024-11-29T10:38:35.651Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-29T10:38:37.038Z] Copying: 577/1024 [MB] (14 MBps) [2024-11-29T10:38:37.979Z] Copying: 594/1024 [MB] (16 MBps) [2024-11-29T10:38:38.920Z] Copying: 610/1024 [MB] (16 MBps) [2024-11-29T10:38:39.861Z] Copying: 623/1024 [MB] (13 MBps) [2024-11-29T10:38:40.803Z] Copying: 639/1024 [MB] (15 MBps) [2024-11-29T10:38:41.747Z] Copying: 658/1024 [MB] (19 MBps) [2024-11-29T10:38:42.692Z] Copying: 677/1024 [MB] (18 MBps) [2024-11-29T10:38:43.637Z] Copying: 694/1024 [MB] (17 MBps) [2024-11-29T10:38:45.024Z] Copying: 716/1024 [MB] (21 MBps) [2024-11-29T10:38:45.968Z] Copying: 735/1024 [MB] (19 MBps) [2024-11-29T10:38:46.913Z] Copying: 
754/1024 [MB] (18 MBps) [2024-11-29T10:38:47.851Z] Copying: 773/1024 [MB] (18 MBps) [2024-11-29T10:38:48.792Z] Copying: 794/1024 [MB] (20 MBps) [2024-11-29T10:38:49.735Z] Copying: 813/1024 [MB] (19 MBps) [2024-11-29T10:38:50.677Z] Copying: 834/1024 [MB] (20 MBps) [2024-11-29T10:38:52.067Z] Copying: 855/1024 [MB] (20 MBps) [2024-11-29T10:38:52.680Z] Copying: 873/1024 [MB] (17 MBps) [2024-11-29T10:38:53.659Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-29T10:38:55.047Z] Copying: 895/1024 [MB] (11 MBps) [2024-11-29T10:38:55.990Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-29T10:38:56.935Z] Copying: 918/1024 [MB] (11 MBps) [2024-11-29T10:38:57.880Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-29T10:38:58.825Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-29T10:38:59.768Z] Copying: 952/1024 [MB] (11 MBps) [2024-11-29T10:39:00.714Z] Copying: 963/1024 [MB] (11 MBps) [2024-11-29T10:39:01.660Z] Copying: 974/1024 [MB] (11 MBps) [2024-11-29T10:39:03.047Z] Copying: 985/1024 [MB] (10 MBps) [2024-11-29T10:39:03.993Z] Copying: 1004/1024 [MB] (19 MBps) [2024-11-29T10:39:04.938Z] Copying: 1022/1024 [MB] (17 MBps) [2024-11-29T10:39:04.938Z] Copying: 1048400/1048576 [kB] (1744 kBps) [2024-11-29T10:39:04.938Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 10:39:04.838813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.473 [2024-11-29 10:39:04.838888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:25.473 [2024-11-29 10:39:04.838906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:25.473 [2024-11-29 10:39:04.838917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.473 [2024-11-29 10:39:04.842093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:25.473 [2024-11-29 10:39:04.845295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.473 [2024-11-29 10:39:04.845464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:25.473 [2024-11-29 10:39:04.845525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:33:25.473 [2024-11-29 10:39:04.845550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.473 [2024-11-29 10:39:04.856059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.473 [2024-11-29 10:39:04.856221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:25.473 [2024-11-29 10:39:04.856292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.435 ms 00:33:25.473 [2024-11-29 10:39:04.856317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.473 [2024-11-29 10:39:04.856375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.473 [2024-11-29 10:39:04.856399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:25.473 [2024-11-29 10:39:04.856421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:25.473 [2024-11-29 10:39:04.856441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.473 [2024-11-29 10:39:04.856512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.473 [2024-11-29 10:39:04.856745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:25.473 [2024-11-29 10:39:04.856771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:25.473 
[2024-11-29 10:39:04.856792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.473 [2024-11-29 10:39:04.856847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:25.473 [2024-11-29 10:39:04.856874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125696 / 261120 wr_cnt: 1 state: open 00:33:25.473 [2024-11-29 10:39:04.856911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.856942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.856971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857154] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:25.473 [2024-11-29 10:39:04.857323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 
10:39:04.857354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:33:25.474 [2024-11-29 10:39:04.857554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:25.474 [2024-11-29 10:39:04.857785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:25.474 [2024-11-29 10:39:04.857834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e481aca-6577-46d9-82fb-5f3ca05f74f2 00:33:25.474 [2024-11-29 10:39:04.857844] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125696 00:33:25.474 [2024-11-29 10:39:04.857853] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125728 00:33:25.474 [2024-11-29 10:39:04.857860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125696 00:33:25.474 [2024-11-29 10:39:04.857869] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:33:25.474 [2024-11-29 10:39:04.857881] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:25.474 [2024-11-29 10:39:04.857889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:25.474 [2024-11-29 10:39:04.857897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:25.474 [2024-11-29 10:39:04.857904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:25.474 [2024-11-29 10:39:04.857911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:25.474 [2024-11-29 10:39:04.857919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.474 [2024-11-29 10:39:04.857927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:25.474 [2024-11-29 10:39:04.857935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:33:25.474 [2024-11-29 10:39:04.857943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.474 [2024-11-29 10:39:04.860290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.474 [2024-11-29 10:39:04.860332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:25.474 [2024-11-29 10:39:04.860346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:33:25.475 [2024-11-29 10:39:04.860354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.860470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.475 [2024-11-29 10:39:04.860479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:25.475 [2024-11-29 10:39:04.860488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:33:25.475 [2024-11-29 10:39:04.860496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.868438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.868494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:25.475 [2024-11-29 10:39:04.868505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.868513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.868570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.868579] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:25.475 [2024-11-29 10:39:04.868594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.868601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.868670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.868685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:25.475 [2024-11-29 10:39:04.868696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.868704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.868722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.868730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:25.475 [2024-11-29 10:39:04.868738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.868745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.883504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.883707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:25.475 [2024-11-29 10:39:04.883728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.883737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:25.475 [2024-11-29 10:39:04.896207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:25.475 [2024-11-29 10:39:04.896305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:25.475 [2024-11-29 10:39:04.896372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:25.475 [2024-11-29 10:39:04.896457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896492] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:25.475 [2024-11-29 10:39:04.896516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:25.475 [2024-11-29 10:39:04.896583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:25.475 [2024-11-29 10:39:04.896658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:25.475 [2024-11-29 10:39:04.896666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:25.475 [2024-11-29 10:39:04.896674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.475 [2024-11-29 10:39:04.896839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.492 ms, result 0 00:33:26.426 00:33:26.426 00:33:26.426 10:39:05 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:26.685 [2024-11-29 10:39:05.890754] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
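A few of the figures above can be cross-checked by hand: the WAF in the "Dump statistics" block is total writes over user writes, and the spdk_dd invocation's --skip/--count line up with the copy totals if they are read as input-block counts. A scratch sketch (the 4 KiB block size and the blocks-as-units reading of --skip/--count are assumptions; the other numbers are taken from the log):

```python
BLK = 4096  # assumed FTL bdev block size (4 KiB); not stated in this log

# Write-amplification factor from the "Dump statistics" block above
total_writes, user_writes = 125728, 125696
print(round(total_writes / user_writes, 4))     # 1.0003 -> the reported WAF

# spdk_dd arguments from the command above, read as input-block counts
count, skip = 262144, 131072
print(count * BLK // 2**20, "MiB to copy")      # 1024 MiB, as in "1024/1024 [MB]"
print(skip * BLK // 2**20, "MiB input offset")  # this pass starts 512 MiB into ftl0

# L2P table size from the layout values in the restore pass below
print(20971520 * 4 // 2**20, "MiB")             # 80 MiB -> "Region l2p ... 80.00 MiB"
```

Note also that "total valid LBAs: 125696" equals both the user-write count and Band 1's fill (125696 / 261120), consistent with a single open band holding all user data at shutdown.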
00:33:26.685 [2024-11-29 10:39:05.890894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96489 ] 00:33:26.685 [2024-11-29 10:39:06.036697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.685 [2024-11-29 10:39:06.056010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.685 [2024-11-29 10:39:06.144767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:26.685 [2024-11-29 10:39:06.144849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:26.947 [2024-11-29 10:39:06.301073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.301232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:26.947 [2024-11-29 10:39:06.301252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:26.947 [2024-11-29 10:39:06.301261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.301316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.301329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:26.947 [2024-11-29 10:39:06.301435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:26.947 [2024-11-29 10:39:06.301447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.301476] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:26.947 [2024-11-29 10:39:06.301759] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:26.947 [2024-11-29 10:39:06.301780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.301792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:26.947 [2024-11-29 10:39:06.301833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:33:26.947 [2024-11-29 10:39:06.301842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.302102] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:26.947 [2024-11-29 10:39:06.302127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.302139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:26.947 [2024-11-29 10:39:06.302148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:26.947 [2024-11-29 10:39:06.302159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.302204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.302213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:26.947 [2024-11-29 10:39:06.302222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:26.947 [2024-11-29 10:39:06.302232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.302460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:26.947 [2024-11-29 10:39:06.302470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:26.947 [2024-11-29 10:39:06.302478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:33:26.947 [2024-11-29 10:39:06.302485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.302559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.302569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:26.947 [2024-11-29 10:39:06.302576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:33:26.947 [2024-11-29 10:39:06.302583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.302605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.302613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:26.947 [2024-11-29 10:39:06.302620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:26.947 [2024-11-29 10:39:06.302627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.302643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:26.947 [2024-11-29 10:39:06.304075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.304096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:26.947 [2024-11-29 10:39:06.304108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:33:26.947 [2024-11-29 10:39:06.304115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.304149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.304157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:26.947 [2024-11-29 10:39:06.304164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:26.947 [2024-11-29 10:39:06.304177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.304193] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:26.947 [2024-11-29 10:39:06.304215] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:26.947 [2024-11-29 10:39:06.304252] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:26.947 [2024-11-29 10:39:06.304266] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:26.947 [2024-11-29 10:39:06.304366] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:26.947 [2024-11-29 10:39:06.304414] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:26.947 [2024-11-29 10:39:06.304425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:26.947 [2024-11-29 10:39:06.304439] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:26.947 [2024-11-29 10:39:06.304450] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:26.947 [2024-11-29 10:39:06.304457] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:26.947 [2024-11-29 10:39:06.304464] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:26.947 [2024-11-29 10:39:06.304471] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:26.947 [2024-11-29 10:39:06.304478] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:26.947 [2024-11-29 10:39:06.304488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.304495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:26.947 [2024-11-29 10:39:06.304502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:33:26.947 [2024-11-29 10:39:06.304508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.304593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.947 [2024-11-29 10:39:06.304600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:26.947 [2024-11-29 10:39:06.304611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:26.947 [2024-11-29 10:39:06.304618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.947 [2024-11-29 10:39:06.304724] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:26.947 [2024-11-29 10:39:06.304734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:26.947 [2024-11-29 10:39:06.304742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:26.947 [2024-11-29 10:39:06.304749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:26.947 [2024-11-29 10:39:06.304756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:26.947 [2024-11-29 10:39:06.304762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:26.947 [2024-11-29 10:39:06.304769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:26.947 [2024-11-29 10:39:06.304776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:26.947 [2024-11-29 10:39:06.304782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:26.947 [2024-11-29 10:39:06.304788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:26.947 [2024-11-29 10:39:06.304794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:26.948 [2024-11-29 10:39:06.304814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:26.948 [2024-11-29 10:39:06.304822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:26.948 [2024-11-29 10:39:06.304830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:26.948 [2024-11-29 10:39:06.304836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:26.948 [2024-11-29 10:39:06.304842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:26.948 [2024-11-29 10:39:06.304856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:26.948 [2024-11-29 10:39:06.304862] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:26.948 [2024-11-29 10:39:06.304876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:26.948 [2024-11-29 10:39:06.304889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:26.948 [2024-11-29 10:39:06.304896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:26.948 [2024-11-29 10:39:06.304909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:26.948 [2024-11-29 10:39:06.304915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:26.948 [2024-11-29 10:39:06.304930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:26.948 [2024-11-29 10:39:06.304936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:26.948 [2024-11-29 10:39:06.304949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:26.948 [2024-11-29 10:39:06.304955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:26.948 [2024-11-29 10:39:06.304961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:26.948 [2024-11-29 10:39:06.304967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:26.948 [2024-11-29 10:39:06.304973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:26.948 [2024-11-29 10:39:06.304980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:26.948 [2024-11-29 10:39:06.304986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:26.948 [2024-11-29 10:39:06.304993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:26.948 [2024-11-29 10:39:06.304999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:26.948 [2024-11-29 10:39:06.305005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:26.948 [2024-11-29 10:39:06.305011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:26.948 [2024-11-29 10:39:06.305017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:26.948 [2024-11-29 10:39:06.305025] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:26.948 [2024-11-29 10:39:06.305033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:26.948 [2024-11-29 10:39:06.305041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:26.948 [2024-11-29 10:39:06.305049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:26.948 [2024-11-29 10:39:06.305058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:26.948 [2024-11-29 10:39:06.305066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:26.948 [2024-11-29 10:39:06.305073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:26.948 
[2024-11-29 10:39:06.305081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:26.948 [2024-11-29 10:39:06.305088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:26.948 [2024-11-29 10:39:06.305095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:26.948 [2024-11-29 10:39:06.305104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:26.948 [2024-11-29 10:39:06.305114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:26.948 [2024-11-29 10:39:06.305132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:26.948 [2024-11-29 10:39:06.305140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:26.948 [2024-11-29 10:39:06.305147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:26.948 [2024-11-29 10:39:06.305157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:26.948 [2024-11-29 10:39:06.305165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:26.948 [2024-11-29 10:39:06.305173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:26.948 [2024-11-29 10:39:06.305181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:26.948 [2024-11-29 10:39:06.305188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:26.948 [2024-11-29 10:39:06.305196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:26.948 [2024-11-29 10:39:06.305241] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:26.948 [2024-11-29 10:39:06.305249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:26.948 [2024-11-29 10:39:06.305265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:26.948 [2024-11-29 10:39:06.305273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:26.948 [2024-11-29 10:39:06.305281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:26.948 [2024-11-29 10:39:06.305291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.305302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:26.948 [2024-11-29 10:39:06.305311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:33:26.948 [2024-11-29 10:39:06.305319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.311555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.311661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:26.948 [2024-11-29 10:39:06.311711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:33:26.948 [2024-11-29 10:39:06.311733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.311841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.311865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:26.948 [2024-11-29 10:39:06.311895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:33:26.948 [2024-11-29 10:39:06.311914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.335491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.335834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:26.948 [2024-11-29 10:39:06.336005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.491 ms 00:33:26.948 [2024-11-29 10:39:06.336070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.336199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.336266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:26.948 [2024-11-29 10:39:06.336321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:26.948 [2024-11-29 10:39:06.336477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.336781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.336917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:26.948 [2024-11-29 10:39:06.337045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:33:26.948 [2024-11-29 10:39:06.337184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.337517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.337603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:26.948 [2024-11-29 10:39:06.337728] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:33:26.948 [2024-11-29 10:39:06.337789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.343357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.948 [2024-11-29 10:39:06.343462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:26.948 [2024-11-29 10:39:06.343518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.368 ms 00:33:26.948 [2024-11-29 10:39:06.343539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.948 [2024-11-29 10:39:06.343646] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:26.949 [2024-11-29 10:39:06.343712] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:26.949 [2024-11-29 10:39:06.343743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.343766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:26.949 [2024-11-29 10:39:06.343844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:33:26.949 [2024-11-29 10:39:06.343871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.356127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.356220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:26.949 [2024-11-29 10:39:06.356278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:33:26.949 [2024-11-29 10:39:06.356739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.356910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.356983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:26.949 [2024-11-29 10:39:06.357009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:33:26.949 [2024-11-29 10:39:06.357061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.357166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.357229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:26.949 [2024-11-29 10:39:06.357536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:26.949 [2024-11-29 10:39:06.357580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.358040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.358131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:26.949 [2024-11-29 10:39:06.358184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:33:26.949 [2024-11-29 10:39:06.358206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.358265] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:26.949 [2024-11-29 10:39:06.358300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.358746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:26.949 [2024-11-29 10:39:06.358794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:26.949 [2024-11-29 10:39:06.358896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.366933] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:26.949 [2024-11-29 10:39:06.367147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.367179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:26.949 [2024-11-29 10:39:06.367295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.212 ms 00:33:26.949 [2024-11-29 10:39:06.367322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.369704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.369830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:26.949 [2024-11-29 10:39:06.369898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:33:26.949 [2024-11-29 10:39:06.369930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.370014] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:26.949 [2024-11-29 10:39:06.370598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.370682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:26.949 [2024-11-29 10:39:06.370733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:33:26.949 [2024-11-29 10:39:06.370757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.370794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.370827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:26.949 [2024-11-29 10:39:06.370846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:26.949 [2024-11-29 10:39:06.370865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.370913] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:26.949 [2024-11-29 10:39:06.370940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.370959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:26.949 [2024-11-29 10:39:06.371012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:26.949 [2024-11-29 10:39:06.371035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.375293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.375399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:26.949 [2024-11-29 10:39:06.375463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms 00:33:26.949 [2024-11-29 10:39:06.375487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.375602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:26.949 [2024-11-29 10:39:06.375642] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:26.949 [2024-11-29 10:39:06.375707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:26.949 [2024-11-29 10:39:06.375717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:26.949 [2024-11-29 10:39:06.376574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 75.138 ms, result 0 00:33:28.331  [2024-11-29T10:39:08.769Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-29T10:39:09.706Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-29T10:39:10.646Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-29T10:39:11.586Z] Copying: 70/1024 [MB] (24 MBps) [2024-11-29T10:39:12.975Z] Copying: 92/1024 [MB] (22 MBps) [2024-11-29T10:39:13.919Z] Copying: 112/1024 [MB] (20 MBps) [2024-11-29T10:39:14.862Z] Copying: 130/1024 [MB] (17 MBps) [2024-11-29T10:39:15.801Z] Copying: 154/1024 [MB] (24 MBps) [2024-11-29T10:39:16.742Z] Copying: 175/1024 [MB] (20 MBps) [2024-11-29T10:39:17.684Z] Copying: 200/1024 [MB] (25 MBps) [2024-11-29T10:39:18.625Z] Copying: 220/1024 [MB] (19 MBps) [2024-11-29T10:39:19.569Z] Copying: 233/1024 [MB] (13 MBps) [2024-11-29T10:39:20.956Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-29T10:39:21.902Z] Copying: 262/1024 [MB] (17 MBps) [2024-11-29T10:39:22.847Z] Copying: 274/1024 [MB] (11 MBps) [2024-11-29T10:39:23.791Z] Copying: 286/1024 [MB] (12 MBps) [2024-11-29T10:39:24.793Z] Copying: 298/1024 [MB] (12 MBps) [2024-11-29T10:39:25.762Z] Copying: 312/1024 [MB] (13 MBps) [2024-11-29T10:39:26.706Z] Copying: 331/1024 [MB] (18 MBps) [2024-11-29T10:39:27.649Z] Copying: 348/1024 [MB] (17 MBps) [2024-11-29T10:39:28.594Z] Copying: 361/1024 [MB] (13 MBps) [2024-11-29T10:39:29.983Z] Copying: 378/1024 [MB] (16 MBps) [2024-11-29T10:39:30.924Z] Copying: 395/1024 [MB] (17 MBps) [2024-11-29T10:39:31.869Z] Copying: 415/1024 [MB] (20 MBps) [2024-11-29T10:39:32.815Z] Copying: 434/1024 [MB] (18 MBps) [2024-11-29T10:39:33.760Z] Copying: 445/1024 [MB] (11 MBps) [2024-11-29T10:39:34.705Z] Copying: 456/1024 [MB] (11 MBps) [2024-11-29T10:39:35.649Z] Copying: 468/1024 [MB] (11 MBps) [2024-11-29T10:39:36.593Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-29T10:39:37.981Z] Copying: 491/1024 [MB] (11 MBps) [2024-11-29T10:39:38.927Z] Copying: 502/1024 [MB] (11 MBps) [2024-11-29T10:39:39.871Z] Copying: 514/1024 [MB] (11 MBps) [2024-11-29T10:39:40.889Z] Copying: 526/1024 [MB] (11 MBps) [2024-11-29T10:39:41.835Z] Copying: 537/1024 [MB] (11 MBps) [2024-11-29T10:39:42.778Z] Copying: 549/1024 [MB] (11 MBps) [2024-11-29T10:39:43.722Z] Copying: 561/1024 [MB] (11 MBps) [2024-11-29T10:39:44.666Z] Copying: 572/1024 [MB] (11 MBps) [2024-11-29T10:39:45.607Z] Copying: 583/1024 [MB] (11 MBps) [2024-11-29T10:39:46.997Z] Copying: 601/1024 [MB] (18 MBps) [2024-11-29T10:39:47.570Z] Copying: 615/1024 [MB] (14 MBps) [2024-11-29T10:39:48.958Z] Copying: 633/1024 [MB] (17 MBps) [2024-11-29T10:39:49.901Z] Copying: 658/1024 [MB] (24 MBps) [2024-11-29T10:39:50.847Z] Copying: 684/1024 [MB] (25 MBps) [2024-11-29T10:39:51.794Z] Copying: 702/1024 [MB] (18 MBps) [2024-11-29T10:39:52.737Z] Copying: 725/1024 [MB] (22 MBps) [2024-11-29T10:39:53.680Z] Copying: 744/1024 [MB] (19 MBps) [2024-11-29T10:39:54.623Z] Copying: 770/1024 [MB] (25 MBps) [2024-11-29T10:39:55.594Z] Copying: 797/1024 [MB] (27 MBps) [2024-11-29T10:39:56.982Z] Copying: 816/1024 [MB] (18 MBps) [2024-11-29T10:39:57.926Z] Copying: 837/1024 [MB] (21 MBps) [2024-11-29T10:39:58.870Z] Copying: 859/1024 [MB] (22 MBps) 
[2024-11-29T10:39:59.815Z] Copying: 885/1024 [MB] (25 MBps) [2024-11-29T10:40:00.754Z] Copying: 907/1024 [MB] (21 MBps) [2024-11-29T10:40:01.697Z] Copying: 938/1024 [MB] (31 MBps) [2024-11-29T10:40:02.642Z] Copying: 955/1024 [MB] (16 MBps) [2024-11-29T10:40:03.588Z] Copying: 971/1024 [MB] (15 MBps) [2024-11-29T10:40:04.978Z] Copying: 986/1024 [MB] (15 MBps) [2024-11-29T10:40:05.920Z] Copying: 998/1024 [MB] (12 MBps) [2024-11-29T10:40:06.181Z] Copying: 1015/1024 [MB] (16 MBps) [2024-11-29T10:40:06.754Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 10:40:06.461130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.289 [2024-11-29 10:40:06.461191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:27.289 [2024-11-29 10:40:06.461205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:27.289 [2024-11-29 10:40:06.461213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.289 [2024-11-29 10:40:06.461234] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:27.289 [2024-11-29 10:40:06.461822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.289 [2024-11-29 10:40:06.461846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:27.289 [2024-11-29 10:40:06.461855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:34:27.289 [2024-11-29 10:40:06.461868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.289 [2024-11-29 10:40:06.462095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.289 [2024-11-29 10:40:06.462112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:27.289 [2024-11-29 10:40:06.462121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:34:27.289 [2024-11-29 10:40:06.462129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.289 [2024-11-29 10:40:06.462173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.289 [2024-11-29 10:40:06.462183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:27.289 [2024-11-29 10:40:06.462192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:27.289 [2024-11-29 10:40:06.462200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.289 [2024-11-29 10:40:06.462266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.289 [2024-11-29 10:40:06.462282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:27.289 [2024-11-29 10:40:06.462294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:27.289 [2024-11-29 10:40:06.462302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.289 [2024-11-29 10:40:06.462317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:27.289 [2024-11-29 10:40:06.462330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:34:27.289 [2024-11-29 10:40:06.462343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462361] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 
10:40:06.462718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:34:27.289 [2024-11-29 10:40:06.462925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:27.289 [2024-11-29 10:40:06.462940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.462997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:27.290 [2024-11-29 10:40:06.463299] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:27.290 [2024-11-29 10:40:06.463308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e481aca-6577-46d9-82fb-5f3ca05f74f2 00:34:27.290 [2024-11-29 10:40:06.463316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 
00:34:27.290 [2024-11-29 10:40:06.463327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5920 00:34:27.290 [2024-11-29 10:40:06.463338] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5888 00:34:27.290 [2024-11-29 10:40:06.463351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0054 00:34:27.290 [2024-11-29 10:40:06.463358] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:27.290 [2024-11-29 10:40:06.463366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:27.290 [2024-11-29 10:40:06.463378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:27.290 [2024-11-29 10:40:06.463389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:27.290 [2024-11-29 10:40:06.463395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:27.290 [2024-11-29 10:40:06.463402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.290 [2024-11-29 10:40:06.463409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:27.290 [2024-11-29 10:40:06.463417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:34:27.290 [2024-11-29 10:40:06.463424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.290 [2024-11-29 10:40:06.464893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.290 [2024-11-29 10:40:06.464921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:27.290 [2024-11-29 10:40:06.464929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:34:27.290 [2024-11-29 10:40:06.464937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.290 [2024-11-29 10:40:06.465012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.290 [2024-11-29 10:40:06.465020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:27.290 [2024-11-29 10:40:06.465027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:34:27.290 [2024-11-29 10:40:06.465034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.290 [2024-11-29 10:40:06.471124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.290 [2024-11-29 10:40:06.471150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:27.290 [2024-11-29 10:40:06.471159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.290 [2024-11-29 10:40:06.471169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.290 [2024-11-29 10:40:06.471370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.290 [2024-11-29 10:40:06.471379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:27.290 [2024-11-29 10:40:06.471387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.290 [2024-11-29 10:40:06.471394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.290 [2024-11-29 10:40:06.471448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.290 [2024-11-29 10:40:06.471459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:27.290 [2024-11-29 10:40:06.471471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.290 [2024-11-29 10:40:06.471478] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.290 [2024-11-29 10:40:06.471492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.471500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:27.291 [2024-11-29 10:40:06.471508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.471515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.480457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.480604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:27.291 [2024-11-29 10:40:06.480660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.480683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:27.291 [2024-11-29 10:40:06.488336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:27.291 [2024-11-29 10:40:06.488465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:27.291 [2024-11-29 10:40:06.488647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:27.291 [2024-11-29 10:40:06.488738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:27.291 [2024-11-29 10:40:06.488831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:27.291 [2024-11-29 10:40:06.488904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.488962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.291 [2024-11-29 10:40:06.488973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:27.291 [2024-11-29 10:40:06.488981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.291 [2024-11-29 10:40:06.488988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.291 [2024-11-29 10:40:06.489106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.943 ms, result 0 00:34:27.291 00:34:27.291 00:34:27.291 10:40:06 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:29.838 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:29.838 Process with pid 94846 is not found 00:34:29.838 Remove shared memory files 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94846 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94846 ']' 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94846 00:34:29.838 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94846) - No such process 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94846 is not found' 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_band_md /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_l2p_l1 /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_l2p_l2 /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_l2p_l2_ctx /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_nvc_md /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_p2l_pool /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_sb /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_sb_shm /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_trim_bitmap /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_trim_log /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_trim_md /dev/hugepages/ftl_5e481aca-6577-46d9-82fb-5f3ca05f74f2_vmap 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:29.838 
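The `md5sum -c` line above is the actual pass criterion for this restore test: the file read back through the restored FTL device must hash identically to the copy taken before the fast shutdown. Two numbers in the preceding statistics dump are also worth decoding: WAF (write amplification factor) is total writes divided by user writes, and the figures above check out as 5920 / 5888 ≈ 1.0054. The `trace_step` records always arrive as (Action|Rollback, name, duration, status) quadruples, so per-step timings can be tabulated with a short awk pass. A minimal sketch, assuming one *NOTICE* record per line as SPDK emits them natively (the hard wrapping in this console capture would first need to be undone), with `ftl.log` as a hypothetical capture file:

```bash
#!/usr/bin/env bash
# Tabulate FTL management-step durations from trace_step records.
# Assumes the four-record pattern emitted by mngt/ftl_mngt.c:
#   ... name: <step>   ... duration: <n> ms   ... status: <s>
awk '
  /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
  /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); dur = $0 }
  /trace_step.*status:/   { sub(/.*status: /, "");
                            printf "%-35s %10s ms  status %s\n", name, dur, $1 }
' ftl.log
```

Run against the startup sequence above, this would show "Initialize NV cache" (23.491 ms) and "Restore valid map metadata" (12.229 ms) dominating the 75.138 ms 'FTL startup' total.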
************************************ 00:34:29.838 END TEST ftl_restore_fast 00:34:29.838 ************************************ 00:34:29.838 00:34:29.838 real 3m43.920s 00:34:29.838 user 3m34.010s 00:34:29.838 sys 0m10.546s 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.838 10:40:08 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:29.838 10:40:09 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:29.838 10:40:09 ftl -- ftl/ftl.sh@14 -- # killprocess 85996 00:34:29.838 Process with pid 85996 is not found 00:34:29.838 10:40:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 85996 ']' 00:34:29.838 10:40:09 ftl -- common/autotest_common.sh@958 -- # kill -0 85996 00:34:29.838 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85996) - No such process 00:34:29.838 10:40:09 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 85996 is not found' 00:34:29.838 10:40:09 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:29.838 10:40:09 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97144 00:34:29.839 10:40:09 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97144 00:34:29.839 10:40:09 ftl -- common/autotest_common.sh@835 -- # '[' -z 97144 ']' 00:34:29.839 10:40:09 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.839 10:40:09 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:29.839 10:40:09 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.839 10:40:09 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:29.839 10:40:09 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:29.839 10:40:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:29.839 [2024-11-29 10:40:09.077993] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
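The `kill -0 85996` / "No such process" exchange above is how `killprocess` decides whether there is anything left to kill: signal 0 delivers nothing and only reports, via the exit status, whether the pid is alive and signalable. A simplified sketch of that pattern (not the exact `autotest_common.sh` implementation, which as the later trace shows also special-cases sudo-owned processes):

```bash
# Liveness check before kill: `kill -0` sends no signal, it only reports
# via its exit status whether the pid exists and can be signaled.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                 # SIGTERM: ask the process to exit
        wait "$pid" 2>/dev/null     # reap it (only works for our own children)
    else
        echo "Process with pid $pid is not found"
    fi
}
```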
00:34:29.839 [2024-11-29 10:40:09.078410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97144 ] 00:34:29.839 [2024-11-29 10:40:09.223144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.839 [2024-11-29 10:40:09.242163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.783 10:40:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.783 10:40:09 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:30.783 10:40:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:30.783 nvme0n1 00:34:30.783 10:40:10 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:30.783 10:40:10 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:30.783 10:40:10 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:31.045 10:40:10 ftl -- ftl/common.sh@28 -- # stores=6f5ea66b-2de9-43f4-9752-6c691b00f3c2 00:34:31.045 10:40:10 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:31.045 10:40:10 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f5ea66b-2de9-43f4-9752-6c691b00f3c2 00:34:31.306 10:40:10 ftl -- ftl/ftl.sh@23 -- # killprocess 97144 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 97144 ']' 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@958 -- # kill -0 97144 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@959 -- # uname 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97144 00:34:31.306 killing process with pid 97144 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97144' 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@973 -- # kill 97144 00:34:31.306 10:40:10 ftl -- common/autotest_common.sh@978 -- # wait 97144 00:34:31.567 10:40:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:31.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:31.828 Waiting for block devices as requested 00:34:31.828 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:31.828 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:32.088 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:32.088 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:37.433 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:37.433 Remove shared memory files 00:34:37.433 10:40:16 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:37.433 10:40:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:37.433 10:40:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:37.433 10:40:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:37.433 10:40:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:37.433 10:40:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:37.433 10:40:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:37.433 
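The `clear_lvols` step above is a list-then-delete loop over the RPC socket: `bdev_lvol_get_lvstores` returns a JSON array, `jq -r '.[] | .uuid'` extracts the store UUIDs, and each one is handed to `bdev_lvol_delete_lvstore -u`. A condensed sketch of that flow, using the same `rpc.py` path as this run:

```bash
# Delete every lvolstore on the target so FTL can reuse the base bdev.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
for lvs in $stores; do
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
done
```

In this run a single store (6f5ea66b-2de9-43f4-9752-6c691b00f3c2) was found and removed before the target process (pid 97144) was shut down.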
************************************ 00:34:37.433 END TEST ftl 00:34:37.433 ************************************ 00:34:37.433 00:34:37.433 real 16m48.034s 00:34:37.433 user 18m56.204s 00:34:37.433 sys 1m15.594s 00:34:37.433 10:40:16 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.433 10:40:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:37.433 10:40:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:37.433 10:40:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:37.433 10:40:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:37.433 10:40:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:37.433 10:40:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:37.433 10:40:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:37.433 10:40:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:37.433 10:40:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:37.433 10:40:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:37.433 10:40:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:37.433 10:40:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.433 10:40:16 -- common/autotest_common.sh@10 -- # set +x 00:34:37.433 10:40:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:37.433 10:40:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:37.433 10:40:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:37.433 10:40:16 -- common/autotest_common.sh@10 -- # set +x 00:34:38.819 INFO: APP EXITING 00:34:38.819 INFO: killing all VMs 00:34:38.819 INFO: killing vhost app 00:34:38.819 INFO: EXIT DONE 00:34:38.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:39.392 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:39.392 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:39.392 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:39.392 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:39.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:39.914 Cleaning 00:34:39.914 Removing: /var/run/dpdk/spdk0/config 00:34:39.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:39.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:40.176 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:40.176 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:40.176 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:40.176 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:40.176 Removing: /var/run/dpdk/spdk0 00:34:40.176 Removing: /var/run/dpdk/spdk_pid68950 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69103 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69304 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69386 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69409 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69521 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69539 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69716 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69789 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69869 00:34:40.176 Removing: /var/run/dpdk/spdk_pid69963 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70044 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70083 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70114 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70185 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70280 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70706 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70748 
00:34:40.176 Removing: /var/run/dpdk/spdk_pid70794 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70805 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70863 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70879 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70937 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70953 00:34:40.176 Removing: /var/run/dpdk/spdk_pid70995 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71013 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71055 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71073 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71200 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71232 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71320 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71481 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71554 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71574 00:34:40.176 Removing: /var/run/dpdk/spdk_pid71994 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72081 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72187 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72223 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72249 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72322 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72936 00:34:40.176 Removing: /var/run/dpdk/spdk_pid72961 00:34:40.176 Removing: /var/run/dpdk/spdk_pid73421 00:34:40.176 Removing: /var/run/dpdk/spdk_pid73508 00:34:40.176 Removing: /var/run/dpdk/spdk_pid73606 00:34:40.176 Removing: /var/run/dpdk/spdk_pid73643 00:34:40.176 Removing: /var/run/dpdk/spdk_pid73668 00:34:40.176 Removing: /var/run/dpdk/spdk_pid73694 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75512 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75627 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75637 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75653 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75694 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75698 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75710 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75755 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75759 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75771 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75810 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75814 00:34:40.176 Removing: /var/run/dpdk/spdk_pid75826 00:34:40.176 Removing: /var/run/dpdk/spdk_pid77222 00:34:40.176 Removing: /var/run/dpdk/spdk_pid77308 00:34:40.176 Removing: /var/run/dpdk/spdk_pid78705 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80445 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80503 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80567 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80671 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80751 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80838 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80896 00:34:40.176 Removing: /var/run/dpdk/spdk_pid80960 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81063 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81149 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81240 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81292 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81356 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81455 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81541 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81627 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81684 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81753 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81847 00:34:40.176 Removing: /var/run/dpdk/spdk_pid81932 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82018 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82070 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82143 00:34:40.176 Removing: 
/var/run/dpdk/spdk_pid82213 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82282 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82374 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82454 00:34:40.176 Removing: /var/run/dpdk/spdk_pid82555 00:34:40.437 Removing: /var/run/dpdk/spdk_pid82613 00:34:40.437 Removing: /var/run/dpdk/spdk_pid82677 00:34:40.437 Removing: /var/run/dpdk/spdk_pid82741 00:34:40.437 Removing: /var/run/dpdk/spdk_pid82814 00:34:40.437 Removing: /var/run/dpdk/spdk_pid82906 00:34:40.437 Removing: /var/run/dpdk/spdk_pid82992 00:34:40.437 Removing: /var/run/dpdk/spdk_pid83129 00:34:40.437 Removing: /var/run/dpdk/spdk_pid83398 00:34:40.437 Removing: /var/run/dpdk/spdk_pid83423 00:34:40.437 Removing: /var/run/dpdk/spdk_pid83868 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84044 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84134 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84238 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84275 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84300 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84602 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84640 00:34:40.437 Removing: /var/run/dpdk/spdk_pid84691 00:34:40.437 Removing: /var/run/dpdk/spdk_pid85056 00:34:40.437 Removing: /var/run/dpdk/spdk_pid85200 00:34:40.437 Removing: /var/run/dpdk/spdk_pid85996 00:34:40.437 Removing: /var/run/dpdk/spdk_pid86112 00:34:40.437 Removing: /var/run/dpdk/spdk_pid86277 00:34:40.437 Removing: /var/run/dpdk/spdk_pid86358 00:34:40.437 Removing: /var/run/dpdk/spdk_pid86710 00:34:40.437 Removing: /var/run/dpdk/spdk_pid86985 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87326 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87476 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87619 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87660 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87891 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87911 00:34:40.437 Removing: /var/run/dpdk/spdk_pid87947 00:34:40.437 Removing: /var/run/dpdk/spdk_pid88167 00:34:40.437 Removing: /var/run/dpdk/spdk_pid88387 00:34:40.437 Removing: /var/run/dpdk/spdk_pid89085 00:34:40.438 Removing: /var/run/dpdk/spdk_pid89964 00:34:40.438 Removing: /var/run/dpdk/spdk_pid90664 00:34:40.438 Removing: /var/run/dpdk/spdk_pid91513 00:34:40.438 Removing: /var/run/dpdk/spdk_pid91638 00:34:40.438 Removing: /var/run/dpdk/spdk_pid91716 00:34:40.438 Removing: /var/run/dpdk/spdk_pid92076 00:34:40.438 Removing: /var/run/dpdk/spdk_pid92124 00:34:40.438 Removing: /var/run/dpdk/spdk_pid92693 00:34:40.438 Removing: /var/run/dpdk/spdk_pid93086 00:34:40.438 Removing: /var/run/dpdk/spdk_pid93909 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94033 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94059 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94123 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94178 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94232 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94421 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94491 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94569 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94625 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94665 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94714 00:34:40.438 Removing: /var/run/dpdk/spdk_pid94846 00:34:40.438 Removing: /var/run/dpdk/spdk_pid95036 00:34:40.438 Removing: /var/run/dpdk/spdk_pid95287 00:34:40.438 Removing: /var/run/dpdk/spdk_pid95852 00:34:40.438 Removing: /var/run/dpdk/spdk_pid96489 00:34:40.438 Removing: /var/run/dpdk/spdk_pid97144 00:34:40.438 Clean 00:34:40.438 10:40:19 -- common/autotest_common.sh@1453 -- # return 0 00:34:40.438 
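The Clean stage above removes two kinds of state: the live target's runtime directory (`/var/run/dpdk/spdk0/` with its fbarray and hugepage maps) and a long backlog of `spdk_pid*` entries left behind by every process this job started. The autotest cleanup simply removes them all; since the entries are keyed by pid, a more conservative sweep could keep any whose owner is still alive. A sketch of that pid-checking variant, purely illustrative and not what the autotest scripts do:

```bash
# Remove stale per-pid DPDK runtime state, keeping entries whose owning
# process still exists (pid parsed from the spdk_pid<NNN> name).
shopt -s nullglob
for d in /var/run/dpdk/spdk_pid*; do
    pid=${d##*/spdk_pid}
    if ! kill -0 "$pid" 2>/dev/null; then
        rm -rf -- "$d"
    fi
done
```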
10:40:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:40.438 10:40:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.438 10:40:19 -- common/autotest_common.sh@10 -- # set +x 00:34:40.438 10:40:19 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:40.438 10:40:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.438 10:40:19 -- common/autotest_common.sh@10 -- # set +x 00:34:40.699 10:40:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:40.699 10:40:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:40.699 10:40:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:40.699 10:40:19 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:40.699 10:40:19 -- spdk/autotest.sh@398 -- # hostname 00:34:40.699 10:40:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:40.699 geninfo: WARNING: invalid characters removed from testname! 00:35:07.283 10:40:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:10.589 10:40:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:12.505 10:40:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:15.065 10:40:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:17.685 10:40:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:20.232 10:40:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:22.781 10:41:02 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:22.781 10:41:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:22.781 10:41:02 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:22.781 10:41:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:22.781 10:41:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:22.781 10:41:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:22.781 + [[ -n 5772 ]] 00:35:22.781 + sudo kill 5772 00:35:22.793 [Pipeline] } 00:35:22.813 [Pipeline] // timeout 00:35:22.819 [Pipeline] } 00:35:22.837 [Pipeline] // stage 00:35:22.842 [Pipeline] } 00:35:22.860 [Pipeline] // catchError 00:35:22.871 [Pipeline] stage 00:35:22.874 [Pipeline] { (Stop VM) 00:35:22.888 [Pipeline] sh 00:35:23.173 + vagrant halt 00:35:25.725 ==> default: Halting domain... 00:35:32.323 [Pipeline] sh 00:35:32.714 + vagrant destroy -f 00:35:35.257 ==> default: Removing domain... 00:35:36.213 [Pipeline] sh 00:35:36.497 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:36.508 [Pipeline] } 00:35:36.524 [Pipeline] // stage 00:35:36.529 [Pipeline] } 00:35:36.543 [Pipeline] // dir 00:35:36.548 [Pipeline] } 00:35:36.562 [Pipeline] // wrap 00:35:36.569 [Pipeline] } 00:35:36.582 [Pipeline] // catchError 00:35:36.593 [Pipeline] stage 00:35:36.595 [Pipeline] { (Epilogue) 00:35:36.608 [Pipeline] sh 00:35:36.895 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:42.189 [Pipeline] catchError 00:35:42.191 [Pipeline] { 00:35:42.207 [Pipeline] sh 00:35:42.493 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:42.493 Artifacts sizes are good 00:35:42.504 [Pipeline] } 00:35:42.521 [Pipeline] // catchError 00:35:42.537 [Pipeline] archiveArtifacts 00:35:42.546 Archiving artifacts 00:35:42.681 [Pipeline] cleanWs 00:35:42.716 [WS-CLEANUP] Deleting project workspace... 00:35:42.716 [WS-CLEANUP] Deferred wipeout is used... 00:35:42.742 [WS-CLEANUP] done 00:35:42.744 [Pipeline] } 00:35:42.764 [Pipeline] // stage 00:35:42.769 [Pipeline] } 00:35:42.786 [Pipeline] // node 00:35:42.792 [Pipeline] End of Pipeline 00:35:42.836 Finished: SUCCESS
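As a closing note on the coverage lines above: the lcov post-processing is a merge-then-filter pipeline. The base and test captures are combined, then DPDK sources, system headers, and example/tool directories are stripped so the report only counts SPDK code. A condensed sketch using the same rc flags and filter patterns as this run (the real invocations use full paths under the repo's output directory):

```bash
# Merge base+test lcov captures, then prune paths we do not want counted.
lcov_flags=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)
lcov "${lcov_flags[@]}" -q -a cov_base.info -a cov_test.info -o cov_total.info
lcov "${lcov_flags[@]}" -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov "${lcov_flags[@]}" -q -r cov_total.info \
     --ignore-errors unused,unused '/usr/*' -o cov_total.info
lcov "${lcov_flags[@]}" -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
```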